title | content | commands | url
---|---|---|---|
Policy APIs | Policy APIs OpenShift Container Platform 4.17 Reference guide for policy APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/policy_apis/index |
Chapter 9. General Updates | Chapter 9. General Updates Incorrect information about the expected default settings of services in Red Hat Enterprise Linux 7 The Preupgrade Assistant module that handles initscripts provides incorrect information about the expected default settings of services in Red Hat Enterprise Linux 7, which are determined by the /usr/lib/systemd/system-preset/90-default.preset file in Red Hat Enterprise Linux 7 and by the current settings of the Red Hat Enterprise Linux 6 system. In addition, the module does not check the default settings of the system but only the settings for the runlevel used during the processing of the check script, which might not be the default runlevel of the system. As a consequence, initscripts are not handled in the anticipated way and the new system needs more manual action than expected. However, the user is informed about the settings that will be chosen for the relevant services, regardless of the presumed default settings. (BZ#1366671) Installing from a USB flash drive fails on UEFI systems The efidisk.img file is required to create a bootable USB drive that will work on a system with UEFI firmware. In this release, a problem during the compose build process caused this file to be generated incorrectly, and as a result, the file is not usable for booting. As a workaround, use one of the alternative methods of booting the installer on UEFI systems: burn one of the provided boot ISO images (boot.iso or the full installation DVD) to a CD or DVD and boot using an optical drive; mount one of the ISO images as a CD or DVD drive; or set up a PXE server and boot from the network. (BZ#1588352) In-place upgrade from a RHEL 6 system to RHEL 7 is impossible with FIPS mode enabled When upgrading a RHEL 6 system to RHEL 7 using the Red Hat Upgrade Tool with FIPS mode enabled, a missing Hash-based Message Authentication Code (HMAC) prevents kernel data from being correctly verified. As a consequence, the Red Hat Upgrade Tool cannot boot into the target system kernel and the process fails. The recommended approach is to perform a clean installation instead. If the administrator disables FIPS mode for the duration of the upgrade, all cryptographic keys must be regenerated and the FIPS compliance of the converted system must be reevaluated. For more information, see How can I make RHEL 6/7/8 FIPS 140-2 compliant? (BZ#1612340) In-place upgrade on IBM Z is impossible if the LDL format is used The Linux Disk Layout (LDL) format is unsupported on RHEL 7. Consequently, on the IBM Z architecture, if a partition is formatted with LDL on one or more Direct Access Storage Devices (DASD), the Preupgrade Assistant indicates this as an extreme risk, and the Red Hat Upgrade Tool does not start the upgrade process, to prevent data loss on such a partition. To work around this problem, migrate to the Common Disk Layout (CDL) format. To check which DASD format is in use, run: The command output will show the following result for the CDL format: or this result for the LDL format: Note that without the RHBA-2019:0411 update applied, data loss can occur because the Preupgrade Assistant was previously unable to detect the LDL format. (BZ#1618926) The Preupgrade Assistant reports notchecked if certain packages are missing on the system If certain required packages are not installed on the system, the Preupgrade Assistant triggered by the preupg command fails to perform the preupgrade assessment. 
Consequently, the test summary displays the notchecked result keyword on each line. To work around this problem, install the 64-bit versions of the openscap, openscap-engine-sce, and openscap-utils packages, remove their 32-bit versions if they are installed, and then run the preupg command again (a shell sketch of these steps follows this entry). (BZ#1804691) | [
"dasdview -x <disc>",
"format : hex 2 dec 2 CDL formatted",
"format : hex 1 dec 1 LDL formatted"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/known_issues_general_updates |
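A minimal shell sketch of the notchecked workaround described in the entry above. The package names come from the text; the architecture suffixes and whether the 32-bit packages are present at all depend on the system, so treat this as illustrative rather than prescriptive:

```
# Install the 64-bit OpenSCAP components required by the Preupgrade Assistant
yum install openscap.x86_64 openscap-engine-sce.x86_64 openscap-utils.x86_64

# Remove the 32-bit counterparts if they happen to be installed
yum remove openscap.i686 openscap-engine-sce.i686 openscap-utils.i686

# Re-run the assessment
preupg
```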
Chapter 5. Additional resources | Chapter 5. Additional resources 5.1. Tag specifications by integration type Tagging standards differ between integration types. To use the same tags or labels across integrations, you must satisfy the most restrictive combination of rules across the different providers. The following table summarizes the tagging and labeling criteria for AWS, Microsoft Azure, Google Cloud, and OpenShift Container Platform 4: Table 5.1. Tagging specifications by integration
Name - AWS: Tags; Azure: Tags; Google Cloud: Labels; Red Hat OpenShift: Labels.
Format - AWS: Key & value; Azure: Name & value; Google Cloud: Key & value; Red Hat OpenShift: Key & value, where keys take the form [prefix/]name and the prefix must be a DNS subdomain.
Allows empty value - Yes for all four.
Unique label per key - Yes for all four.
Case sensitive - AWS: Yes; Azure: No; Google Cloud: only lowercase letters; Red Hat OpenShift: Yes.
Limit per resource - AWS: 50; Azure: 50 (15 for storage); Google Cloud: 64; Red Hat OpenShift: N/A.
Length of key - AWS: 128; Azure: 512 (128 for storage); Google Cloud: 63; Red Hat OpenShift: 253 (prefix) / 63 (name).
Length of value - AWS: 256; Azure: 256; Google Cloud: 63; Red Hat OpenShift: 63.
Allowed characters - AWS: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @. Azure: tag names cannot contain these characters: <, >, %, &, \, ?, /. Google Cloud: only lowercase letters, numeric characters, underscores, and dashes. Red Hat OpenShift: the name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]), with dashes (-), underscores (_), dots (.), and alphanumerics between.
Restrictions - AWS: the prefix "aws:" is reserved; tags applied to EC2 can use any character; not all resource types support tags. Azure: not all resource types support tags; generalized VMs do not support tags; tags applied to the resource group are not inherited by the resources. Google Cloud: keys must start with a lowercase letter or international character. Red Hat OpenShift: the prefixes kubernetes.io/ and k8s.io/ are reserved; not all resource types support tags.
Notes - AWS: you need to select the tag keys that will be included in cost and usage files and billing reports. Azure: you can use a JSON string to go over the limit of keys. Google Cloud: there is no limit on how many labels you can apply across all resources within a project. Red Hat OpenShift: if the prefix is omitted, the label key is presumed to be private to the user.
5.2. Further reading The following links provide further guidance on tagging for each integration type. AWS: AWS tagging strategies IAM: Add a specific tag with specific values OpenShift: Kubernetes labels and selectors Kubernetes user guide: labels Microsoft Azure: Azure resource naming and tagging decision guide Azure recommended naming and tagging conventions Use tags to organize your Azure resources and management hierarchy Enforce tags in Azure resource groups Google Cloud: Creating and managing labels | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/managing_cost_data_using_tagging/assembly-additional-resources-tags |
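As a concrete illustration of the key and value rules above, the following commands apply an equivalent tag in OpenShift and in AWS. The resource names, the example.com prefix, and the key/value pair are hypothetical; only the format constraints (optional DNS-subdomain prefix, 63-character name segment, no reserved prefixes) are taken from the table:

```
# OpenShift/Kubernetes label with an optional DNS-subdomain prefix
oc label namespace finance-apps example.com/cost-center=finance-123

# Equivalent AWS tag on an EC2 instance; the key avoids the reserved "aws:" prefix
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=cost-center,Value=finance-123
```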
Chapter 1. Cloud integrations on the Hybrid Cloud Console | Chapter 1. Cloud integrations on the Hybrid Cloud Console You can integrate some public clouds and third-party applications with the Hybrid Cloud Console. For information about integrating third-party applications to receive event notifications, see Integrating the Red Hat Hybrid Cloud Console with third-party applications. A cloud integration on the Red Hat Hybrid Cloud Console is an association with a public cloud service, application, or provider that supplies data to a Hybrid Cloud Console service. Services on the Hybrid Cloud Console use the integrations service to connect with public cloud providers and other services or tools to collect information for the service. You can integrate the following public clouds with the Hybrid Cloud Console: Amazon Web Services (AWS) Microsoft Azure Google Cloud Oracle Cloud You can also connect your Red Hat OpenShift Container Platform environment to the Hybrid Cloud Console as a cloud integration to use with the cost management service on the console. You can add and manage cloud and Red Hat integrations from the Integrations page, located in the Hybrid Cloud Console Settings menu. The Integrations service uses a wizard to help you connect cloud and Red Hat integrations to the Hybrid Cloud Console. For cloud integrations, you can associate the provider with Red Hat services, including cost management, launch images, and the Red Hat Enterprise Linux (RHEL) management bundle. For Red Hat integrations, you can add Red Hat OpenShift Container Platform. Associating a service is optional for cloud integrations, but is required for Red Hat integrations. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/about-cloud-integrations_crc-cloud-integrations |
5.147. libexif | 5.147. libexif 5.147.1. RHSA-2012:1255 - Moderate: libexif security update Updated libexif packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libexif packages provide an Exchangeable image file format (Exif) library. Exif allows metadata to be added to and read from certain types of image files. Security Fix CVE-2012-2812 , CVE-2012-2813 , CVE-2012-2814 , CVE-2012-2836 , CVE-2012-2837 , CVE-2012-2840 , CVE-2012-2841 Multiple flaws were found in the way libexif processed Exif tags. An attacker could create a specially-crafted image file that, when opened in an application linked against libexif, could cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. Red Hat would like to thank Dan Fandrich for reporting these issues. Upstream acknowledges Mateusz Jurczyk of the Google Security Team as the original reporter of CVE-2012-2812, CVE-2012-2813, and CVE-2012-2814; and Yunho Kim as the original reporter of CVE-2012-2836 and CVE-2012-2837. Users of libexif are advised to upgrade to these updated packages, which contain backported patches to resolve these issues. All running applications linked against libexif must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libexif |
2.5. Data Security for Library Mode | 2.5. Data Security for Library Mode 2.5.1. Subject and Principal Classes To authorize access to resources, applications must first authenticate the request's source. The JAAS framework defines the term subject to represent a request's source. The Subject class is the central class in JAAS. A Subject represents information for a single entity, such as a person or service. It encompasses the entity's principals, public credentials, and private credentials. The JAAS APIs use the existing Java 2 java.security.Principal interface to represent a principal, which is a typed name. During the authentication process, a subject is populated with associated identities, or principals. A subject may have many principals. For example, a person may have a name principal (John Doe), a social security number principal (123-45-6789), and a user name principal (johnd), all of which help distinguish the subject from other subjects. To retrieve the principals associated with a subject, two methods are available: getPrincipals() returns all principals contained in the subject. getPrincipals(Class c) returns only those principals that are instances of class c or one of its subclasses. An empty set is returned if the subject has no matching principals. Note The java.security.acl.Group interface is a sub-interface of java.security.Principal, so an instance in the principals set may represent a logical grouping of other principals or groups of principals. 2.5.2. Obtaining a Subject In order to use a secured cache in Library mode, you must obtain a javax.security.auth.Subject. The Subject represents information for a single cache entity, such as a person or a service. Red Hat JBoss Data Grid allows a JAAS Subject to be obtained either by using your container's features, or by using a third-party library. In JBoss containers, this can be done using the following: The Subject must be populated with a set of Principals, which represent the user and the groups it belongs to in your security domain, for example, an LDAP or Active Directory. The Java EE API allows retrieval of a container-set Principal through the following methods: Servlets: ServletRequest.getUserPrincipal() EJBs: EJBContext.getCallerPrincipal() MessageDrivenBeans: MessageDrivenContext.getCallerPrincipal() The mapper is then used to identify the principals associated with the Subject and convert them into roles that correspond to those you have defined at the container level. A Principal is only one of the components of a Subject, which is retrieved from the java.security.AccessControlContext. Either the container sets the Subject on the AccessControlContext, or the user must map the Principal to an appropriate Subject before wrapping the call to the JBoss Data Grid API using a Security.doAs() method. Once a Subject has been obtained, the cache can be interacted with in the context of a PrivilegedAction. Example 2.7. Obtaining a Subject The Security.doAs() method is used in place of the typical Subject.doAs() method. Unless the AccessControlContext must be modified for reasons specific to your application's security model, using Security.doAs() provides a performance advantage. To obtain the current Subject, use Security.getSubject(), which retrieves the Subject from either the JBoss Data Grid context or from the AccessControlContext. 2.5.3. Subject Authentication Subject Authentication requires a JAAS login. 
The login process consists of the following points: An application instantiates a LoginContext and passes in the name of the login configuration and a CallbackHandler to populate the Callback objects, as required by the configuration LoginModules. The LoginContext consults a Configuration to load all the LoginModules included in the named login configuration. If no configuration with that name exists, the configuration named other is used as the default. The application invokes the LoginContext.login method. The login method invokes all the loaded LoginModules. As each LoginModule attempts to authenticate the subject, it invokes the handle method on the associated CallbackHandler to obtain the information required for the authentication process. The required information is passed to the handle method in the form of an array of Callback objects. Upon success, the LoginModules associate relevant principals and credentials with the subject. The LoginContext returns the authentication status to the application. Success is represented by a return from the login method. Failure is represented through a LoginException being thrown by the login method. If authentication succeeds, the application retrieves the authenticated subject using the LoginContext.getSubject method. After the scope of the subject authentication is complete, all principals and related information associated with the subject by the login method can be removed by invoking the LoginContext.logout method. The LoginContext class provides the basic methods for authenticating subjects and offers a way to develop an application that is independent of the underlying authentication technology. The LoginContext consults a Configuration to determine the authentication services configured for a particular application. LoginModule classes represent the authentication services. Therefore, you can plug different login modules into an application without changing the application itself. The following code shows the steps required by an application to authenticate a subject. Developers integrate with an authentication technology by creating an implementation of the LoginModule interface. This allows an administrator to plug different authentication technologies into an application. You can chain together multiple LoginModules to allow for more than one authentication technology to participate in the authentication process. For example, one LoginModule may perform user name/password-based authentication, while another may interface to hardware devices such as smart card readers or biometric authenticators. The life cycle of a LoginModule is driven by the LoginContext object that the client creates and on which it invokes the login method. The process consists of two phases. The steps of the process are as follows: The LoginContext creates each configured LoginModule using its public no-arg constructor. Each LoginModule is initialized with a call to its initialize method. The Subject argument is guaranteed to be non-null. The signature of the initialize method is: public void initialize(Subject subject, CallbackHandler callbackHandler, Map sharedState, Map options) The login method is called to start the authentication process. For example, a method implementation might prompt the user for a user name and password and then verify the information against data stored in a naming service such as NIS or LDAP. Alternative implementations might interface to smart cards and biometric devices, or simply extract user information from the underlying operating system. 
The validation of user identity by each LoginModule is considered phase 1 of JAAS authentication. The signature of the login method is boolean login() throws LoginException. A LoginException indicates failure. A return value of true indicates that the method succeeded, whereas a return value of false indicates that the login module should be ignored. If the LoginContext's overall authentication succeeds, commit is invoked on each LoginModule. If phase 1 succeeds for a LoginModule, then the commit method continues with phase 2 and associates the relevant principals, public credentials, and/or private credentials with the subject. If phase 1 fails for a LoginModule, then commit removes any previously stored authentication state, such as user names or passwords. The signature of the commit method is: boolean commit() throws LoginException. Failure to complete the commit phase is indicated by throwing a LoginException. A return of true indicates that the method succeeded, whereas a return of false indicates that the login module should be ignored. If the LoginContext's overall authentication fails, then the abort method is invoked on each LoginModule. The abort method removes or destroys any authentication state created by the login or initialize methods. The signature of the abort method is boolean abort() throws LoginException. Failure to complete the abort phase is indicated by throwing a LoginException. A return of true indicates that the method succeeded, whereas a return of false indicates that the login module should be ignored. To remove the authentication state after a successful login, the application invokes logout on the LoginContext. This in turn results in a logout method invocation on each LoginModule. The logout method removes the principals and credentials originally associated with the subject during the commit operation. Credentials should be destroyed upon removal. The signature of the logout method is: boolean logout() throws LoginException. Failure to complete the logout process is indicated by throwing a LoginException. A return of true indicates that the method succeeded, whereas a return of false indicates that the login module should be ignored. When a LoginModule must communicate with the user to obtain authentication information, it uses a CallbackHandler object. Applications implement the CallbackHandler interface and pass it to the LoginContext, which sends the authentication information directly to the underlying login modules. Login modules use the CallbackHandler both to gather input from users, such as a password or smart card PIN, and to supply information to users, such as status information. By allowing the application to specify the CallbackHandler, underlying LoginModules remain independent from the different ways applications interact with users. For example, a CallbackHandler implementation for a GUI application might display a window to solicit user input. On the other hand, a CallbackHandler implementation for a non-GUI environment, such as an application server, might simply obtain credential information by using an application server API. The CallbackHandler interface has one method to implement: The Callback interface is the last authentication class we will look at. This is a tagging interface for which several default implementations are provided, including the NameCallback and PasswordCallback used in an earlier example. A LoginModule uses a Callback to request information required by the authentication mechanism. 
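To tie the two-phase life cycle and the callback mechanism together, the following is a minimal, illustrative LoginModule sketch. It is not taken from the product documentation: the class name, the SimplePrincipal helper, and the verifyCredentials() check are placeholders for whatever your security domain actually uses.

```
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class SimpleLoginModule implements LoginModule {

    private Subject subject;
    private CallbackHandler callbackHandler;
    private boolean loginSucceeded;
    private String username;
    private SimplePrincipal principal;

    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.subject = subject;
        this.callbackHandler = callbackHandler;
    }

    // Phase 1: collect credentials through the CallbackHandler and verify them
    public boolean login() throws LoginException {
        NameCallback nameCb = new NameCallback("User name: ");
        PasswordCallback passwordCb = new PasswordCallback("Password: ", false);
        try {
            callbackHandler.handle(new Callback[] { nameCb, passwordCb });
        } catch (Exception e) {
            throw new LoginException("Unable to collect credentials: " + e.getMessage());
        }
        username = nameCb.getName();
        loginSucceeded = verifyCredentials(username, passwordCb.getPassword());
        passwordCb.clearPassword();
        return loginSucceeded;
    }

    // Phase 2: associate principals with the subject only if phase 1 succeeded
    public boolean commit() throws LoginException {
        if (!loginSucceeded) {
            username = null;
            return false;
        }
        principal = new SimplePrincipal(username);
        subject.getPrincipals().add(principal);
        return true;
    }

    // Overall authentication failed: discard any state created so far
    public boolean abort() throws LoginException {
        loginSucceeded = false;
        username = null;
        principal = null;
        return true;
    }

    // Remove what commit() associated with the subject
    public boolean logout() throws LoginException {
        if (principal != null) {
            subject.getPrincipals().remove(principal);
        }
        loginSucceeded = false;
        username = null;
        principal = null;
        return true;
    }

    // Placeholder check only; a real module would consult LDAP, NIS, a database, etc.
    private boolean verifyCredentials(String name, char[] password) {
        return name != null && password != null && password.length > 0;
    }

    // Tiny Principal implementation used only to keep this sketch self-contained
    private static final class SimplePrincipal implements java.security.Principal {
        private final String name;
        SimplePrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }
}
```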
LoginModules pass an array of Callbacks directly to the CallbackHandler.handle method during the authentication's login phase. If a CallbackHandler does not understand how to use a Callback object passed into the handle method, it throws an UnsupportedCallbackException to abort the login call. 2.5.4. Authorization Using a SecurityManager In Red Hat JBoss Data Grid's Remote Client-Server mode, authorization works without a SecurityManager for basic cache operations. In Library mode, a SecurityManager may also be used to perform some of the more complex tasks, such as distexec, map/reduce, and query. To enforce access restrictions, enable the SecurityManager in your JVM using one of the following methods: on the command line, or programmatically. Using the JDK's default implementation is not required; however, an appropriate policy file must be supplied. The JBoss Data Grid distribution includes an example policy file, which demonstrates the permissions required by some of JBoss Data Grid's JAR files. These permissions must be integrated with those required by your application. 2.5.5. Security Manager in Java 2.5.5.1. About the Java Security Manager Java Security Manager The Java Security Manager is a class that manages the external boundary of the Java Virtual Machine (JVM) sandbox, controlling how code executing within the JVM can interact with resources outside the JVM. When the Java Security Manager is activated, the Java API checks with the security manager for approval before executing a wide range of potentially unsafe operations. The Java Security Manager uses a security policy to determine whether a given action will be permitted or denied. 2.5.5.2. About Java Security Manager Policies Security Policy A set of defined permissions for different classes of code. The Java Security Manager compares actions requested by applications against the security policy. If an action is allowed by the policy, the Security Manager will permit that action to take place. If the action is not allowed by the policy, the Security Manager will deny that action. The security policy can define permissions based on the location of code, on the code's signature, or based on the subject's principals. The Java Security Manager and the security policy used are configured using the Java Virtual Machine options java.security.manager and java.security.policy. Basic Information A security policy's entry consists of the following configuration elements, which are connected to the policytool: CodeBase The URL location (excluding the host and domain information) where the code originates from. This parameter is optional. SignedBy The alias used in the keystore to reference the signer whose private key was used to sign the code. This can be a single value or a comma-separated list of values. This parameter is optional. If omitted, presence or lack of a signature has no impact on the Java Security Manager. Principals A list of principal_type/principal_name pairs, which must be present within the executing thread's principal set. The Principals entry is optional. If it is omitted, it signifies that the principals of the executing thread will have no impact on the Java Security Manager. Permissions A permission is the access which is granted to the code. Many permissions are provided as part of the Java Enterprise Edition 6 (Java EE 6) specification. This document only covers additional permissions which are provided by JBoss EAP 6. 
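As an illustration of how those entry elements combine, a single grant block in a policy file might look like the sketch below. This is not the example policy shipped with the product; the codeBase URL and the specific permissions are placeholders and must be replaced with whatever your application actually requires:

```
// Illustrative policy entry only: grants limited file and property access
// to code loaded from the JBoss modules tree
grant codeBase "file:${jboss.home.dir}/modules/-" {
    permission java.io.FilePermission "${jboss.server.data.dir}${/}-", "read,write";
    permission java.util.PropertyPermission "*", "read";
};
```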
Important Refer to your container documentation on how to configure the security policy, as it may differ depending on the implementation. 2.5.5.3. Write a Java Security Manager Policy Introduction An application called policytool is included with most JDK and JRE distributions, for the purpose of creating and editing Java Security Manager security policies. Detailed information about policytool is linked from http://docs.oracle.com/javase/6/docs/technotes/tools/. Procedure 2.1. Set up a new Java Security Manager Policy Start policytool. Start the policytool tool in one of the following ways. Red Hat Enterprise Linux From your GUI or a command prompt, run /usr/bin/policytool. Microsoft Windows Server Run policytool.exe from your Start menu or from the bin\ directory of your Java installation. The location can vary. Create a policy. To create a policy, select Add Policy Entry. Add the parameters you need, then click Done. Edit an existing policy. Select the policy from the list of existing policies, and select the Edit Policy Entry button. Edit the parameters as needed. Delete an existing policy. Select the policy from the list of existing policies, and select the Remove Policy Entry button. 2.5.5.4. Run Red Hat JBoss Data Grid Server Within the Java Security Manager To specify a Java Security Manager policy, you need to edit the Java options passed to the server instance during the bootstrap process. For this reason, you cannot pass the parameters as options to the standalone.sh script. The following procedure guides you through the steps of configuring your instance to run within a Java Security Manager policy. Prerequisites Before you follow this procedure, you need to write a security policy, using the policytool command which is included with your Java Development Kit (JDK). This procedure assumes that your policy is located at JDG_HOME/bin/server.policy. As an alternative, write the security policy using any text editor and manually save it as JDG_HOME/bin/server.policy. The JBoss Data Grid server must be completely stopped before you edit any configuration files. Perform the following procedure for each physical host or instance in your environment. Procedure 2.2. Configure the Security Manager for JBoss Data Grid Server Open the configuration file. Open the configuration file for editing. The location of this file is listed below by OS. Note that this is not the executable file used to start the server, but a configuration file that contains runtime parameters. For Linux: JDG_HOME/bin/standalone.conf For Windows: JDG_HOME\bin\standalone.conf.bat Add the Java options to the file. To ensure the Java options are used, add them to the code block that begins with: You can modify the -Djava.security.policy value to specify the exact location of your security policy. It should go onto one line only, with no line break. Using == when setting the -Djava.security.policy property specifies that the security manager will use only the specified policy file. Using = specifies that the security manager will use the specified policy combined with the policy set in the policy.url section of JAVA_HOME/lib/security/java.security. Important JBoss Enterprise Application Platform releases from 6.2.2 onwards require that the system property jboss.modules.policy-permissions is set to true. Example 2.8. 
standalone.conf Example 2.9. standalone.conf.bat Start the server. Start the server as normal. | [
"public Set getPrincipals() {...} public Set getPrincipals(Class c) {...}",
"Subject subject = SecurityContextAssociation.getSubject();",
"import org.infinispan.security.Security; Security.doAs(subject, new PrivilegedExceptionAction<Void>() { public Void run() throws Exception { cache.put(\"key\", \"value\"); } });",
"CallbackHandler handler = new MyHandler(); LoginContext lc = new LoginContext(\"some-config\", handler); try { lc.login(); Subject subject = lc.getSubject(); } catch(LoginException e) { System.out.println(\"authentication failed\"); e.printStackTrace(); } // Perform work as authenticated Subject // // Scope of work complete, logout to remove authentication info try { lc.logout(); } catch(LoginException e) { System.out.println(\"logout failed\"); e.printStackTrace(); } // A sample MyHandler class class MyHandler implements CallbackHandler { public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (int i = 0; i < callbacks.length; i++) { if (callbacks[i] instanceof NameCallback) { NameCallback nc = (NameCallback)callbacks[i]; nc.setName(username); } else if (callbacks[i] instanceof PasswordCallback) { PasswordCallback pc = (PasswordCallback)callbacks[i]; pc.setPassword(password); } else { throw new UnsupportedCallbackException(callbacks[i], \"Unrecognized Callback\"); } } } }",
"void handle(Callback[] callbacks) throws java.io.IOException, UnsupportedCallbackException;",
"java -Djava.security.manager",
"System.setSecurityManager(new SecurityManager());",
"if [ \"xUSDJAVA_OPTS\" = \"x\" ]; then",
"JAVA_OPTS=\"USDJAVA_OPTS -Djava.security.manager -Djava.security.policy==USDPWD/server.policy -Djboss.home.dir=USDJBOSS_HOME -Djboss.modules.policy-permissions=true\"",
"set \"JAVA_OPTS=%JAVA_OPTS% -Djava.security.manager -Djava.security.policy==\\path\\to\\server.policy -Djboss.home.dir=%JBOSS_HOME% -Djboss.modules.policy-permissions=true\""
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/sect-data_security_for_library_mode |
Chapter 4. Selecting a system-wide archive Red Hat build of OpenJDK version | Chapter 4. Selecting a system-wide archive Red Hat build of OpenJDK version If you have multiple versions of Red Hat build of OpenJDK installed with the archive on RHEL 8, you can select a specific Red Hat build of OpenJDK version to use system-wide. Prerequisites Know the locations of the Red Hat build of OpenJDK versions installed using the archive. Procedure To specify the Red Hat build of OpenJDK version to use for a single session: Configure JAVA_HOME with the path to the Red Hat build of OpenJDK version you want used system-wide. $ export JAVA_HOME=/opt/jdk/jdk-1.8.0 Add $JAVA_HOME/bin to the PATH environment variable. $ export PATH="$JAVA_HOME/bin:$PATH" To specify the Red Hat build of OpenJDK version to use permanently for a single user, add these commands into ~/.bashrc: To specify the Red Hat build of OpenJDK version to use permanently for all users, add these commands into /etc/bashrc: Note If you do not want to redefine JAVA_HOME, add only the PATH command to bashrc, specifying the path to the Java binary. For example, export PATH="/opt/jdk/jdk-1.8.0/bin:$PATH" Additional resources For more information about the exact meaning of JAVA_HOME, see Changes/Decouple system java setting from java command setting. | [
"export JAVA_HOME=/opt/jdk/jdk-1.8.0 export PATH=\"USDJAVA_HOME/bin:USDPATH\"",
"export JAVA_HOME=/opt/jdk/jdk-1.8.0 export PATH=\"USDJAVA_HOME/bin:USDPATH\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_for_rhel/selecting-systemwide-archive-openjdk8-version |
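After exporting the variables from the entry above, a quick check confirms that the shell resolves the intended JDK; the path shown is only an example and should match wherever the archive was unpacked:

```
echo $JAVA_HOME      # expect something like /opt/jdk/jdk-1.8.0
which java           # should point at $JAVA_HOME/bin/java
java -version        # should report the selected Red Hat build of OpenJDK
```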
Chapter 11. Precaching glance images into nova | Chapter 11. Precaching glance images into nova When you configure OpenStack Compute to use local ephemeral storage, glance images are cached to speed up the deployment of instances. If an image that is necessary for an instance is not already cached, it is downloaded to the local disk of the Compute node when you create the instance. The process of downloading a glance image takes a variable amount of time, depending on the image size and network characteristics such as bandwidth and latency. If you attempt to start an instance and the image is not available on the local Ceph cluster, the instance launch fails with the following message: You see the following in the Compute service log: The instance fails to start due to a parameter in the nova.conf configuration file called never_download_image_if_on_rbd, which is set to true by default for DCN deployments. You can control this value using the heat parameter NovaDisableImageDownloadToRbd, which you can find in the dcn-storage.yaml file. If you set the value of NovaDisableImageDownloadToRbd to false prior to deploying the overcloud, the following occurs: The Compute service (nova) will automatically stream images available at the central location if they are not available locally. You will not be using a COW copy from glance images. The Compute (nova) storage will potentially contain multiple copies of the same image, depending on the number of instances using it. You may saturate both the WAN link to the central location as well as the nova storage pool. Red Hat recommends leaving this value set to true, and ensuring required images are available locally prior to launching an instance. For more information on making images available to the edge, see Section A.1.3, "Copying an image to a new site". For images that are local, you can speed up the creation of VMs by using the tripleo_nova_image_cache.yml ansible playbook to pre-cache commonly used images or images that are likely to be deployed in the near future. 11.1. Running the tripleo_nova_image_cache.yml ansible playbook Prerequisites Authentication credentials to the correct API in the shell environment. Before the command provided in each step, you must ensure that the correct authentication file is sourced. Procedure Create an ansible inventory directory for your overcloud stacks: Create a list of image IDs that you want to pre-cache: Retrieve a comprehensive list of available images: Create an ansible playbook argument file called nova_cache_args.yml, and add the IDs of the images that you want to pre-cache: Run the tripleo_nova_image_cache.yml ansible playbook: 11.2. Performance considerations You can specify the number of images that you want to download concurrently with the ansible forks parameter, which defaults to a value of 5. You can reduce the time needed to distribute the images by increasing the value of the forks parameter; however, you must balance this with the increase in network and glance-api load. Use the --forks parameter to adjust concurrency as shown: 11.3. Optimizing the image distribution to DCN sites You can reduce WAN traffic by using a proxy for glance image distribution. When you configure a proxy: Glance images are downloaded to a single Compute node that acts as the proxy. The proxy redistributes the glance image to other Compute nodes in the inventory. You can place the following parameters in the nova_cache_args.yml ansible argument file to configure a proxy node. 
Set the tripleo_nova_image_cache_use_proxy parameter to true to enable the image cache proxy. The image proxy uses secure copy (scp) to distribute images to other nodes in the inventory. SCP is inefficient over networks with high latency, such as a WAN between DCN sites. Red Hat recommends that you limit the playbook target to a single DCN location, which correlates to a single stack. Use the tripleo_nova_image_cache_proxy_hostname parameter to select the image cache proxy. The default proxy is the first compute node in the ansible inventory file. Use the tripleo_nova_image_cache_plan parameter to limit the playbook inventory to a single site: 11.4. Configuring the nova-cache cleanup A background process runs periodically to remove images from the nova cache when both of the following conditions are true: The image is not in use by an instance. The age of the image is greater than the value for the nova parameter remove_unused_original_minimum_age_seconds. The default value for the remove_unused_original_minimum_age_seconds parameter is 86400. The value is expressed in seconds and is equal to 24 hours. You can control this value with the NovaImageCacheTTL tripleo-heat-templates parameter during the initial deployment, or during a stack update of your cloud: When you instruct the playbook to pre-cache an image that already exists on a Compute node, ansible does not report a change, but the age of the image is reset to 0. Run the ansible play more frequently than the value of the NovaImageCacheTTL parameter to maintain a cache of images. | [
"Build of instance 3c04e982-c1d1-4364-b6bd-f876e399325b aborted: Image 20c5ff9d-5f54-4b74-830f-88e78b9999ed is unacceptable: No image locations are accessible",
"'Image %s is not on my ceph and [workarounds]/ never_download_image_if_on_rbd=True; refusing to fetch and upload.',",
"mkdir inventories find ~/overcloud-deploy/*/config-download -name tripleo-ansible-inventory.yaml | while read f; do cp USDf inventories/USD(basename USD(dirname USDf)).yaml; done",
"source centralrc openstack image list +--------------------------------------+---------+--------+ | ID | Name | Status | +--------------------------------------+---------+--------+ | 07bc2424-753b-4f65-9da5-5a99d8383fe6 | image_0 | active | | d5187afa-c821-4f22-aa4b-4e76382bef86 | image_1 | active | +--------------------------------------+---------+--------+",
"--- tripleo_nova_image_cache_images: - id: 07bc2424-753b-4f65-9da5-5a99d8383fe6 - id: d5187afa-c821-4f22-aa4b-4e76382bef86",
"source centralrc ansible-playbook -i inventories --extra-vars \"@nova_cache_args.yml\" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml",
"ansible-playbook -i inventory.yaml --forks 10 --extra-vars \"@nova_cache_args.yml\" /usr/share/ansible/tripleo-playbooks/tripleo_nova_image_cache.yml",
"tripleo_nova_image_cache_use_proxy: true tripleo_nova_image_cache_proxy_hostname: dcn0-novacompute-1 tripleo_nova_image_cache_plan: dcn0",
"parameter_defaults: NovaImageCacheTTL: 604800 # Default to 7 days for all compute roles Compute2Parameters: NovaImageCacheTTL: 1209600 # Override to 14 days for the Compute2 compute role"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/precaching-glance-images-into-nova |
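Putting the options from this chapter together, a nova_cache_args.yml for a single DCN site might look like the sketch below. The image ID and the dcn0 host and plan names are copied from the examples above and are illustrative only; substitute the values from your own deployment, then run the playbook exactly as shown earlier, optionally with --forks to tune concurrency:

```
# nova_cache_args.yml - combined sketch of the pre-cache options discussed above
tripleo_nova_image_cache_images:
  - id: 07bc2424-753b-4f65-9da5-5a99d8383fe6
tripleo_nova_image_cache_use_proxy: true
tripleo_nova_image_cache_proxy_hostname: dcn0-novacompute-1
tripleo_nova_image_cache_plan: dcn0
```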
2.8.9.2.5. Target Options | 2.8.9.2.5. Target Options When a packet has matched a particular rule, the rule can direct the packet to a number of different targets which determine the appropriate action. Each chain has a default target, which is used if none of the rules on that chain match a packet or if none of the rules which match the packet specify a target. The following are the standard targets: <user-defined-chain> - A user-defined chain within the table. User-defined chain names must be unique. This target passes the packet to the specified chain. ACCEPT - Allows the packet through to its destination or to another chain. DROP - Drops the packet without responding to the requester. The system that sent the packet is not notified of the failure. QUEUE - The packet is queued for handling by a user-space application. RETURN - Stops checking the packet against rules in the current chain. If the packet with a RETURN target matches a rule in a chain called from another chain, the packet is returned to the first chain to resume rule checking where it left off. If the RETURN rule is used on a built-in chain and the packet cannot move up to its chain, the default target for the current chain is used. In addition, extensions are available which allow other targets to be specified. These extensions are called target modules or match option modules and most only apply to specific tables and situations. Refer to Section 2.8.9.2.4.4, "Additional Match Option Modules" for more information about match option modules. Many extended target modules exist, most of which only apply to specific tables or situations. Some of the most popular target modules included by default in Red Hat Enterprise Linux are: LOG - Logs all packets that match this rule. Because the packets are logged by the kernel, the /etc/syslog.conf file determines where these log entries are written. By default, they are placed in the /var/log/messages file. Additional options can be used after the LOG target to specify the way in which logging occurs: --log-level - Sets the priority level of a logging event. Refer to the syslog.conf man page for a list of priority levels. --log-ip-options - Logs any options set in the header of an IP packet. --log-prefix - Places a string of up to 29 characters before the log line when it is written. This is useful for writing syslog filters for use in conjunction with packet logging. Note Due to an issue with this option, you should add a trailing space to the log-prefix value. --log-tcp-options - Logs any options set in the header of a TCP packet. --log-tcp-sequence - Writes the TCP sequence number for the packet in the log. REJECT - Sends an error packet back to the remote system and drops the packet. The REJECT target accepts --reject-with <type> (where <type> is the rejection type) allowing more detailed information to be returned with the error packet. The message port-unreachable is the default error type given if no other option is used. Refer to the iptables man page for a full list of <type> options. Other target extensions, including several that are useful for IP masquerading using the nat table, or with packet alteration using the mangle table, can be found in the iptables man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-command_options_for_iptables-target_options |
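The LOG and REJECT extensions described above can be combined in a pair of rules like the following sketch. The chain, protocol, and port are placeholders; note the trailing space in the log prefix, as recommended above. If --reject-with is omitted, the default port-unreachable error type is used:

```
# Log incoming telnet attempts, then reject them with a TCP reset
iptables -A INPUT -p tcp --dport 23 -j LOG --log-prefix "TELNET-ATTEMPT " --log-level info
iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with tcp-reset
```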
Chapter 1. Overview | Chapter 1. Overview Ceph Object Gateway, also known as RADOS Gateway (RGW) is an object storage interface built on top of librados to provide applications with a RESTful gateway to Ceph storage clusters. Ceph object gateway supports two interfaces: S3-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API. Swift-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API. The Ceph object gateway is a server for interacting with a Ceph storage cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph object gateway has its own user management. Ceph object gateway can store data in the same Ceph storage cluster used to store data from Ceph block device clients; however, it would involve separate pools and likely a different CRUSH hierarchy. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve it with the other. Warning Do not use RADOS snapshots on pools used by RGW. Doing so can introduce undesirable data inconsistencies. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_configuration_and_administration_guide/overview-rgw |
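Because the S3 and Swift APIs share a common namespace, an object written through one interface can be read back through the other. A minimal illustration using two common clients follows; the bucket/container name, the object name, and the client credential configuration are placeholders and assumed to already be set up:

```
# Upload through the S3-compatible API
s3cmd put report.csv s3://demo-bucket/report.csv

# Read the same object back through the Swift-compatible API
swift download demo-bucket report.csv
```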
7.226. setroubleshoot | 7.226. setroubleshoot 7.226.1. RHBA-2013:0387 - setroubleshoot bug fix update Updated setroubleshoot packages that fix several bugs are now available for Red Hat Enterprise Linux 6. This package provides a set of analysis plugins for use with setroubleshoot. Each plugin has the capacity to analyze SELinux AVC (Access Vector Cache) data and system data to provide user friendly reports describing how to interpret SELinux AVC denial messages. Bug Fixes BZ# 788196 Prior to this update, the "sealert -a /var/log/audit/audit.log -H" command did not work correctly. When opening the audit.log file, the sealert utility returned an error when the "-H" option was used. The relevant source code has been modified and the "-H" sealert option is no longer recognized as a valid option. BZ#832143 Previously, SELinux Alert Browser did not display alerts even if SELinux denial messages were present. This was caused by the sedispatch utility, which did not handle audit messages correctly, and users were not able to fix their SELinux issues according to the SELinux alerts. Now, SELinux Alert Browser properly alerts the user in the described scenario. BZ#842445 Under certain circumstances, sealert produced the " 'tuple' object has no attribute 'split' " error message. A patch has been provided to fix this bug. As a result, sealert no longer returns this error message. BZ# 851824 The sealert utility returned parse error messages if an alert description contained parentheses. With this update, sealert has been fixed and now, the error messages are no longer returned in the described scenario. BZ# 864429 Previously, improper documentation content was present in files located in the /usr/share/doc/setroubleshoot/ directory. This update removes certain unneeded files and fixes content of others. Users of setroubleshoot are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/setroubleshoot |
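Given the fix described in BZ#788196, the audit log is analyzed without the removed -H option; a typical invocation is simply:

```
sealert -a /var/log/audit/audit.log
```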
Release Notes and Known Issues | Release Notes and Known Issues Red Hat CodeReady Workspaces 2.15 Release Notes and Known Issues for Red Hat CodeReady Workspaces 2.15 Robert Kratky [email protected] Fabrice Flore-Thebault [email protected] Jana Vrbkova [email protected] Max Leonov [email protected] Red Hat Developer Group Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/release_notes_and_known_issues/index |
Chapter 1. Issue: Auto-reboot during Argo CD sync with machine configurations | Chapter 1. Issue: Auto-reboot during Argo CD sync with machine configurations In the Red Hat OpenShift Container Platform, nodes are updated automatically through the Red Hat OpenShift Machine Config Operator (MCO). The MCO manages MachineConfig custom resources, which the cluster uses to manage the complete life cycle of its nodes. When a machine configuration is created or updated in a cluster, the MCO picks up the update, performs the necessary changes to the selected nodes, and restarts those nodes gracefully by cordoning, draining, and rebooting them. It handles everything from the kernel to the kubelet. However, interactions between the MCO and the GitOps workflow can introduce major performance issues and other undesired behaviors. This section shows how to make the MCO and the Argo CD GitOps orchestration tool work well together. 1.1. Solution: Enhance performance in machine configurations and Argo CD When you are using the MCO as part of a GitOps workflow, the following sequence can produce suboptimal performance: Argo CD starts an automated sync job after a commit to the Git repository that contains application resources. If Argo CD notices a new or an updated machine configuration while the sync operation is in progress, the MCO picks up the change to the machine configuration and starts rebooting the nodes to apply the change. If a rebooting node in the cluster contains the Argo CD application controller, the application controller terminates, and the application sync is aborted. As the MCO reboots the nodes in sequential order, and the Argo CD workloads can be rescheduled on each reboot, it can take some time for the sync to be completed. This results in undefined behavior until the MCO has rebooted all nodes affected by the machine configurations within the sync. 1.2. Additional resources Preventing nodes from auto-rebooting during Argo CD sync with machine configs | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/troubleshooting_issues/auto-reboot-during-argo-cd-sync-with-machine-configurations |
probe::scheduler.kthread_stop.return | probe::scheduler.kthread_stop.return Name probe::scheduler.kthread_stop.return - A kthread is stopped and gets the return value Synopsis scheduler.kthread_stop.return Values return_value return value after stopping the thread name name of the probe point | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-kthread-stop-return |
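As an illustration of how the two values above can be consumed, a one-line SystemTap script (with a hypothetical output format) might be:

```
# Print the probe point name and the kthread's return value each time a kthread is stopped
stap -e 'probe scheduler.kthread_stop.return { printf("%s: return_value=%d\n", name, return_value) }'
```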
API Documentation | API Documentation Red Hat JBoss Data Virtualization 6.4 API reference for developers. David Le Sage [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/api_documentation/index |
Chapter 5. Hibernate Services | Chapter 5. Hibernate Services 5.1. About Hibernate Services Services are classes that provide Hibernate with pluggable implementations of various types of functionality. Specifically they are implementations of certain service contract interfaces. The interface is known as the service role; the implementation class is known as the service implementation. Generally speaking, users can plug in alternate implementations of all standard service roles (overriding); they can also define additional services beyond the base set of service roles (extending). 5.2. About Service Contracts The basic requirement for a service is to implement the marker interface org.hibernate.service.Service. Hibernate uses this internally for some basic type safety. Optionally, the service can also implement the org.hibernate.service.spi.Startable and org.hibernate.service.spi.Stoppable interfaces to receive notifications of being started and stopped. Another optional service contract is org.hibernate.service.spi.Manageable which marks the service as manageable in Jakarta Management provided the Jakarta Management integration is enabled. 5.3. Types of Service Dependencies Services are allowed to declare dependencies on other services using either of the following approaches: @org.hibernate.service.spi.InjectService Any method on the service implementation class accepting a single parameter and annotated with @InjectService is considered requesting injection of another service. By default the type of the method parameter is expected to be the service role to be injected. If the parameter type is different than the service role, the serviceRole attribute of the InjectService should be used to explicitly name the role. By default injected services are considered required, that is the startup will fail if a named dependent service is missing. If the service to be injected is optional, the required attribute of the InjectService should be declared as false . The default is true . org.hibernate.service.spi.ServiceRegistryAwareService The second approach is a pull approach where the service implements the optional service interface org.hibernate.service.spi.ServiceRegistryAwareService which declares a single injectServices method. During startup, Hibernate will inject the org.hibernate.service.ServiceRegistry itself into services which implement this interface. The service can then use the ServiceRegistry reference to locate any additional services it needs. 5.3.1. The Service Registry 5.3.1.1. About the ServiceRegistry The central service API, aside from the services themselves, is the org.hibernate.service.ServiceRegistry interface. The main purpose of a service registry is to hold, manage and provide access to services. Service registries are hierarchical. Services in one registry can depend on and utilize services in that same registry as well as any parent registries. Use org.hibernate.service.ServiceRegistryBuilder to build a org.hibernate.service.ServiceRegistry instance. Example Using ServiceRegistryBuilder to Create a ServiceRegistry ServiceRegistryBuilder registryBuilder = new ServiceRegistryBuilder( bootstrapServiceRegistry ); ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry(); 5.3.2. Custom Services 5.3.2.1. About Custom Services Once a org.hibernate.service.ServiceRegistry is built it is considered immutable; the services themselves might accept reconfiguration, but immutability here means adding or replacing services. 
So another role provided by the org.hibernate.service.ServiceRegistryBuilder is to allow tweaking of the services that will be contained in the org.hibernate.service.ServiceRegistry generated from it. There are two means to tell a org.hibernate.service.ServiceRegistryBuilder about custom services. Implement a org.hibernate.service.spi.BasicServiceInitiator class to control on-demand construction of the service class and add it to the org.hibernate.service.ServiceRegistryBuilder using its addInitiator method. Just instantiate the service class and add it to the org.hibernate.service.ServiceRegistryBuilder using its addService method. Either approach is valid for extending a registry, such as adding new service roles, and overriding services, such as replacing service implementations. Example: Use ServiceRegistryBuilder to Replace an Existing Service with a Custom Service ServiceRegistryBuilder registryBuilder = new ServiceRegistryBuilder(bootstrapServiceRegistry); registryBuilder.addService(JdbcServices.class, new MyCustomJdbcService()); ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry(); public class MyCustomJdbcService implements JdbcServices{ @Override public ConnectionProvider getConnectionProvider() { return null; } @Override public Dialect getDialect() { return null; } @Override public SqlStatementLogger getSqlStatementLogger() { return null; } @Override public SqlExceptionHelper getSqlExceptionHelper() { return null; } @Override public ExtractedDatabaseMetaData getExtractedMetaDataSupport() { return null; } @Override public LobCreator getLobCreator(LobCreationContext lobCreationContext) { return null; } @Override public ResultSetWrapper getResultSetWrapper() { return null; } } 5.3.3. The Boot-Strap Registry 5.3.3.1. About the Boot-strap Registry The boot-strap registry holds services that absolutely have to be available for most things to work. The main service here is the ClassLoaderService which is a perfect example. Even resolving configuration files needs access to class loading services i.e. resource look ups. This is the root registry, no parent, in normal use. Instances of boot-strap registries are built using the org.hibernate.service.BootstrapServiceRegistryBuilder class. Using BootstrapServiceRegistryBuilder Example: Using BootstrapServiceRegistryBuilder BootstrapServiceRegistry bootstrapServiceRegistry = new BootstrapServiceRegistryBuilder() // pass in org.hibernate.integrator.spi.Integrator instances which are not // auto-discovered (for whatever reason) but which should be included .with(anExplicitIntegrator) // pass in a class loader that Hibernate should use to load application classes .with(anExplicitClassLoaderForApplicationClasses) // pass in a class loader that Hibernate should use to load resources .with(anExplicitClassLoaderForResources) // see BootstrapServiceRegistryBuilder for rest of available methods ... // finally, build the bootstrap registry with all the above options .build(); 5.3.3.2. BootstrapRegistry Services org.hibernate.service.classloading.spi.ClassLoaderService Hibernate needs to interact with class loaders. However, the manner in which Hibernate, or any library, should interact with class loaders varies based on the runtime environment that is hosting the application. Application servers, OSGi containers, and other modular class loading systems impose very specific class loading requirements. This service provides Hibernate an abstraction from this environmental complexity. 
And just as importantly, it does so in a single-swappable-component manner. In terms of interacting with a class loader, Hibernate needs the following capabilities: the ability to locate application classes the ability to locate integration classes the ability to locate resources, such as properties files and XML files the ability to load java.util.ServiceLoader Note Currently, the ability to load application classes and the ability to load integration classes are combined into a single load class capability on the service. That may change in a later release. org.hibernate.integrator.spi.IntegratorService Applications, add-ons and other modules need to integrate with Hibernate. The approach required a component, usually an application, to coordinate the registration of each individual module. This registration was conducted on behalf of each module's integrator. This service focuses on the discovery aspect. It leverages the standard java.util.ServiceLoader capability provided by the org.hibernate.service.classloading.spi.ClassLoaderService in order to discover implementations of the org.hibernate.integrator.spi.Integrator contract. Integrators would simply define a file named /META-INF/services/org.hibernate.integrator.spi.Integrator and make it available on the class path. This file is used by the java.util.ServiceLoader mechanism. It lists, one per line, the fully qualified names of classes which implement the org.hibernate.integrator.spi.Integrator interface. 5.3.4. SessionFactory Registry While it is best practice to treat instances of all the registry types as targeting a given org.hibernate.SessionFactory , the instances of services in this group explicitly belong to a single org.hibernate.SessionFactory . The difference is a matter of timing in when they need to be initiated. Generally they need access to the org.hibernate.SessionFactory to be initiated. This special registry is org.hibernate.service.spi.SessionFactoryServiceRegistry . 5.3.4.1. SessionFactory Services org.hibernate.event.service.spi.EventListenerRegistry Description Service for managing event listeners. Initiator org.hibernate.event.service.internal.EventListenerServiceInitiator Implementations org.hibernate.event.service.internal.EventListenerRegistryImpl 5.3.5. Integrators The org.hibernate.integrator.spi.Integrator is intended to provide a simple means for allowing developers to hook into the process of building a functioning SessionFactory . The org.hibernate.integrator.spi.Integrator interface defines two methods of interest: integrate allows us to hook into the building process disintegrate allows us to hook into a SessionFactory shutting down. Note There is a third method defined in org.hibernate.integrator.spi.Integrator , an overloaded form of integrate, accepting a org.hibernate.metamodel.source.MetadataImplementor instead of org.hibernate.cfg.Configuration . In addition to the discovery approach provided by the IntegratorService , applications can manually register Integrator implementations when building the BootstrapServiceRegistry . 5.3.5.1. Integrator Use Cases The main use cases for an org.hibernate.integrator.spi.Integrator are registering event listeners and providing services, see org.hibernate.integrator.spi.ServiceContributingIntegrator . 
Example: Registering Event Listeners public class MyIntegrator implements org.hibernate.integrator.spi.Integrator { public void integrate( Configuration configuration, SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) { // As you might expect, an EventListenerRegistry is the thing with which event listeners are registered It is a // service so we look it up using the service registry final EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class); // If you wish to have custom determination and handling of "duplicate" listeners, you would have to add an // implementation of the org.hibernate.event.service.spi.DuplicationStrategy contract like this eventListenerRegistry.addDuplicationStrategy(myDuplicationStrategy); // EventListenerRegistry defines 3 ways to register listeners: // 1) This form overrides any existing registrations with eventListenerRegistry.setListeners(EventType.AUTO_FLUSH, myCompleteSetOfListeners); // 2) This form adds the specified listener(s) to the beginning of the listener chain eventListenerRegistry.prependListeners(EventType.AUTO_FLUSH, myListenersToBeCalledFirst); // 3) This form adds the specified listener(s) to the end of the listener chain eventListenerRegistry.appendListeners(EventType.AUTO_FLUSH, myListenersToBeCalledLast); } } | [
"ServiceRegistryBuilder registryBuilder = new ServiceRegistryBuilder( bootstrapServiceRegistry ); ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry();",
"ServiceRegistryBuilder registryBuilder = new ServiceRegistryBuilder(bootstrapServiceRegistry); registryBuilder.addService(JdbcServices.class, new MyCustomJdbcService()); ServiceRegistry serviceRegistry = registryBuilder.buildServiceRegistry(); public class MyCustomJdbcService implements JdbcServices{ @Override public ConnectionProvider getConnectionProvider() { return null; } @Override public Dialect getDialect() { return null; } @Override public SqlStatementLogger getSqlStatementLogger() { return null; } @Override public SqlExceptionHelper getSqlExceptionHelper() { return null; } @Override public ExtractedDatabaseMetaData getExtractedMetaDataSupport() { return null; } @Override public LobCreator getLobCreator(LobCreationContext lobCreationContext) { return null; } @Override public ResultSetWrapper getResultSetWrapper() { return null; } }",
"BootstrapServiceRegistry bootstrapServiceRegistry = new BootstrapServiceRegistryBuilder() // pass in org.hibernate.integrator.spi.Integrator instances which are not // auto-discovered (for whatever reason) but which should be included .with(anExplicitIntegrator) // pass in a class loader that Hibernate should use to load application classes .with(anExplicitClassLoaderForApplicationClasses) // pass in a class loader that Hibernate should use to load resources .with(anExplicitClassLoaderForResources) // see BootstrapServiceRegistryBuilder for rest of available methods // finally, build the bootstrap registry with all the above options .build();",
"public class MyIntegrator implements org.hibernate.integrator.spi.Integrator { public void integrate( Configuration configuration, SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) { // As you might expect, an EventListenerRegistry is the thing with which event listeners are registered It is a // service so we look it up using the service registry final EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class); // If you wish to have custom determination and handling of \"duplicate\" listeners, you would have to add an // implementation of the org.hibernate.event.service.spi.DuplicationStrategy contract like this eventListenerRegistry.addDuplicationStrategy(myDuplicationStrategy); // EventListenerRegistry defines 3 ways to register listeners: // 1) This form overrides any existing registrations with eventListenerRegistry.setListeners(EventType.AUTO_FLUSH, myCompleteSetOfListeners); // 2) This form adds the specified listener(s) to the beginning of the listener chain eventListenerRegistry.prependListeners(EventType.AUTO_FLUSH, myListenersToBeCalledFirst); // 3) This form adds the specified listener(s) to the end of the listener chain eventListenerRegistry.appendListeners(EventType.AUTO_FLUSH, myListenersToBeCalledLast); } }"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_hibernate_applications/hibernate_services |
Chapter 3. Configuring the Collector | Chapter 3. Configuring the Collector 3.1. Configuring the Collector The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file. 3.1.1. OpenTelemetry Collector configuration options The OpenTelemetry Collector consists of five types of components that access telemetry data: Receivers Processors Exporters Connectors Extensions You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need. Example of the OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus] 1 If a component is configured but not defined in the service section, the component is not enabled. Table 3.1. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger , prometheus , zipkin , kafka , opencensus None Processors run through the received data before it is exported. By default, no processors are enabled. batch , memory_limiter , resourcedetection , attributes , span , k8sattributes , filter , routing None An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. otlp , otlphttp , debug , prometheus , kafka None Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. spanmetrics None Optional components for tasks that do not involve processing telemetry data. bearertokenauth , oauth2client , jaegerremotesampling , pprof , health_check , memory_ballast , zpages None Components are enabled by adding them to a pipeline under services.pipeline . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . 
None You enable receivers for metrics by adding them under service.pipelines.metrics . None You enable processors for metircs by adding them under service.pipelines.metrics . None You enable exporters for metrics by adding them under service.pipelines.metrics . None 3.1.2. Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manage service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator 3.2. Receivers Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Currently, the following General Availability and Technology Preview receivers are available for the Red Hat build of OpenTelemetry: OTLP Receiver Jaeger Receiver Host Metrics Receiver Kubernetes Objects Receiver Kubelet Stats Receiver Prometheus Receiver OTLP JSON File Receiver Zipkin Receiver Kafka Receiver Kubernetes Cluster Receiver OpenCensus Receiver Filelog Receiver Journald Receiver Kubernetes Events Receiver 3.2.1. OTLP Receiver The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP). The OTLP Receiver ingests traces and metrics using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with an enabled OTLP Receiver # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp] # ... 1 The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used. 2 The server-side TLS configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig . For more information, see the Config of the Golang TLS package . 4 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns , us (or ms ), ms , s , m , h . 5 The OTLP HTTP endpoint. The default value is 0.0.0.0:4318 . 6 The server-side TLS configuration. For more information, see the grpc protocol configuration section. 3.2.2. Jaeger Receiver The Jaeger Receiver ingests traces in the Jaeger formats. 
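Only the protocols listed under protocols are opened, so a stripped-down configuration can expose a single Jaeger endpoint. The following minimal sketch assumes that only the gRPC protocol on its conventional 14250 port is needed; it is an illustration rather than a reference, and the full example that follows enables all four supported protocols.
# ...
  config:
    receivers:
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250 # conventional Jaeger gRPC port; adjust for your environment
    service:
      pipelines:
        traces:
          receivers: [jaeger]
# ...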
OpenTelemetry Collector custom resource with an enabled Jaeger Receiver # ... config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger] # ... 1 The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used. 2 The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used. 3 The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used. 4 The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used. 5 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.3. Host Metrics Receiver The Host Metrics Receiver ingests metrics in the OTLP format. OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> # ... --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics] # ... 1 Sets the time interval for host metrics collection. If omitted, the default value is 1m . 2 Sets the initial time delay for host metrics collection. If omitted, the default value is 1s . 3 Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance. 4 Lists the enabled host metrics scrapers. Available scrapers are cpu , disk , load , filesystem , memory , network , paging , processes , and process . 3.2.4. Kubernetes Objects Receiver The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver watches primarily Kubernetes events, but it can collect any type of Kubernetes objects. This receiver gathers telemetry for the cluster as a whole, so only one instance of this receiver suffices for collecting all the data. Important The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - "" resources: - events - pods verbs: - get - list - watch - apiGroups: - "events.k8s.io" resources: - events verbs: - watch - list # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug] # ... 1 The Resource name that this receiver observes: for example, pods , deployments , or events . 2 The observation mode that this receiver uses: pull or watch . 3 Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h . 4 The label selector to define targets. 5 The field selector to filter targets. 6 The list of namespaces to collect events from. If omitted, the default value is all . 3.2.5. Kubelet Stats Receiver The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet's API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis. OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver # ... config: receivers: kubeletstats: collection_interval: 20s auth_type: "serviceAccount" endpoint: "https://USD{env:K8S_NODE_NAME}:10250" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName # ... 1 Sets the K8S_NODE_NAME to authenticate to the API. The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector. Permissions required by the service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [""] resources: ["nodes/proxy"] 1 verbs: ["get"] # ... 1 The permissions required when using the extra_metadata_labels or request_utilization or limit_utilization metrics. 3.2.6. Prometheus Receiver The Prometheus Receiver scrapes the metrics endpoints. Important The Prometheus Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Prometheus Receiver # ... config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus] # ... 1 Scrapes configurations using the Prometheus format. 2 The Prometheus job name. 3 The lnterval for scraping the metrics data. Accepts time units. The default value is 1m . 4 The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project. 3.2.7. OTLP JSON File Receiver The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification. The receiver watches a specified directory for changes such as created or modified files to process. Important The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver # ... config: otlpjsonfile: include: - "/var/log/*.log" 1 exclude: - "/var/log/test.log" 2 # ... 1 The list of file path glob patterns to watch. 2 The list of file path glob patterns to ignore. 3.2.8. Zipkin Receiver The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats. OpenTelemetry Collector custom resource with the enabled Zipkin Receiver # ... config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin] # ... 1 The Zipkin HTTP endpoint. If omitted, the default 0.0.0.0:9411 is used. 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.9. Kafka Receiver The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format. Important The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Receiver # ... config: receivers: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . 
This is a required field. 3 The name of the Kafka topic to read from. The default is otlp_spans . 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.2.10. Kubernetes Cluster Receiver The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts. Important The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver # ... config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug] # ... This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account. ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol # ... RBAC rules for the ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default # ... 3.2.11. OpenCensus Receiver The OpenCensus Receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format via gRPC or HTTP and Json. OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver # ... 
config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus] # ... 1 The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678 . 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3 You can also use the HTTP JSON endpoint to optionally configure CORS, which is enabled by specifying a list of allowed CORS origins in this field. Wildcards with * are accepted under the cors_allowed_origins . To match any origin, enter only * . 3.2.12. Filelog Receiver The Filelog Receiver tails and parses logs from files. Important The Filelog Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Filelog Receiver that tails a text file # ... config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev # ... 1 A list of file glob patterns that match the file paths to be read. 2 An array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into a desired format, chain the Operators together. 3.2.13. Journald Receiver The Journald Receiver parses journald events from the systemd journal and sends them as logs. Important The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Journald Receiver apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: "false" pod-security.kubernetes.io/enforce: "privileged" pod-security.kubernetes.io/audit: "privileged" pod-security.kubernetes.io/warn: "privileged" # ... --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald # ... 
--- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule # ... 1 Filters output by message priorities or priority ranges. The default value is info . 2 Lists the units to read entries from. If empty, entries are read from all units. 3 Includes very long logs and logs with unprintable characters. The default value is false . 4 If set to true , the receiver pauses reading a file and attempts to resend the current batch of logs when encountering an error from downstream components. The default value is false . 5 The time interval to wait after the first failure before retrying. The default value is 1s . The units are ms , s , m , h . 6 The upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is 30s . The supported units are ms , s , m , h . 7 The maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is 0 , retrying never stops. The default value is 5m . The supported units are ms , s , m , h . 3.2.14. Kubernetes Events Receiver The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs. Important The Kubernetes Events Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
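The ClusterRole in the example that follows must also be bound to the service account under which the Collector runs; otherwise the receiver cannot list or watch events. A minimal sketch of that wiring is shown below. The service account name otel-collector, the observability namespace, and the binding name are assumptions and must match your deployment.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector # assumed name; must match spec.serviceAccount of the Collector
  namespace: observability # assumed namespace
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-k8s-events # assumed binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector # the ClusterRole defined in the following example
subjects:
- kind: ServiceAccount
  name: otel-collector
  namespace: observability
# ...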
OpenShift Container Platform permissions required for the Kubernetes Events Receiver apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... OpenTelemetry Collector custom resource with the enabled Kubernetes Event Receiver # ... serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events] # ... 1 The service account of the Collector that has the required ClusterRole otel-collector RBAC. 2 The list of namespaces to collect events from. The default value is empty, which means that all namespaces are collected. 3.2.15. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.3. Processors Processors process the data between it is received and exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters. Currently, the following General Availability and Technology Preview processors are available for the Red Hat build of OpenTelemetry: Batch Processor Memory Limiter Processor Resource Detection Processor Attributes Processor Resource Processor Span Processor Kubernetes Attributes Processor Filter Processor Routing Processor Cumulative-to-Delta Processor Group-by-Attributes Processor Transform Processor 3.3.1. Batch Processor The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information. Example of the OpenTelemetry Collector custom resource when using the Batch Processor # ... config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch] # ... Table 3.2. Parameters used by the Batch Processor Parameter Description Default timeout Sends the batch after a specific time duration and irrespective of the batch size. 200ms send_batch_size Sends the batch of telemetry data after the specified number of spans or metrics. 8192 send_batch_max_size The maximum allowable size of the batch. Must be equal or greater than the send_batch_size . 0 metadata_keys When activated, a batcher instance is created for each unique set of values found in the client.Metadata . [] metadata_cardinality_limit When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations processed throughout the duration of the process. 1000 3.3.2. Memory Limiter Processor The Memory Limiter Processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. 
The preceding component, which is typically a receiver, is expected to retry sending the same data and may apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run. Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor # ... config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [memory_limiter] metrics: processors: [memory_limiter] # ... Table 3.3. Parameters used by the Memory Limiter Processor Parameter Description Default check_interval Time between memory usage measurements. The optimal value is 1s . For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib . 0s limit_mib The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. 0 spike_limit_mib Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib . To calculate the soft limit, subtract the spike_limit_mib from the limit_mib . 20% of limit_mib limit_percentage Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. 0 spike_limit_percentage Same as the spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting. 0 3.3.3. Resource Detection Processor The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry's resource semantic standards. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector. Important The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform permissions required for the Resource Detection Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] # ... OpenTelemetry Collector using the Resource Detection Processor # ... config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection] # ... OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector # ... config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false # ... 1 Specifies which detector to use. In this example, the environment detector is specified. 3.3.4. Attributes Processor The Attributes Processor can modify attributes of a span, log, or metric.
You can configure this processor to filter and match input data and include or exclude such data for specific actions. Important The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported: Insert Inserts a new attribute into the input data when the specified key does not already exist. Update Updates an attribute in the input data if the key already exists. Upsert Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists. Delete Removes an attribute from the input data. Hash Hashes an existing attribute value as SHA1. Extract Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor's to_attributes setting with the existing attribute as the source. Convert Converts an existing attribute to a specified type. OpenTelemetry Collector using the Attributes Processor # ... config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int # ... 3.3.5. Resource Processor The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs. Important The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Resource Processor # ... config: processors: resource: attributes: - key: cloud.availability_zone value: "zone-1" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete # ... The attributes list defines the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute. 3.3.6. Span Processor The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces. Span renaming requires specifying attributes for the new name by using the from_attributes configuration.
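For instance, a configuration that builds the span name from two attributes might look like the following sketch. The attribute keys db.system and db.operation and the separator are illustrative assumptions; the generic form of this configuration appears in the examples later in this section.
# ...
  config:
    processors:
      span/rename:
        name:
          from_attributes: [db.system, db.operation] # illustrative attribute keys
          separator: "-"
    service:
      pipelines:
        traces:
          processors: [span/rename]
# ...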
Important The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Span Processor for renaming a span # ... config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2 # ... 1 Defines the keys to form the new span name. 2 An optional separator. You can use this processor to extract attributes from the span name. OpenTelemetry Collector using the Span Processor for extracting attributes from a span name # ... config: processors: span/to_attributes: name: to_attributes: rules: - ^\/api\/v1\/document\/(?P<documentId>.*)\/updateUSD 1 # ... 1 This rule defines how the extraction is to be executed. You can define more rules: for example, in this case, if the regular expression matches the name, a documentID attibute is created. In this example, if the input span name is /api/v1/document/12345678/update , this results in the /api/v1/document/{documentId}/update output span name, and a new "documentId"="12345678" attribute is added to the span. You can have the span status modified. OpenTelemetry Collector using the Span Processor for status change # ... config: processors: span/set_status: status: code: Error description: "<error_description>" # ... 3.3.7. Kubernetes Attributes Processor The Kubernetes Attributes Processor enables automatic configuration of spans, metrics, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata. Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list'] # ... OpenTelemetry Collector using the Kubernetes Attributes Processor # ... config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME # ... 3.3.8. Filter Processor The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs. Important The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled OTLP Exporter # ... config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes["container.name"] == "app_container_1"' 2 - 'resource.attributes["host.name"] == "localhost"' 3 # ... 1 Defines the error mode. When set to ignore , ignores errors returned by conditions. When set to propagate , returns the error up the pipeline. An error causes the payload to be dropped from the Collector. 2 Filters the spans that have the container.name == app_container_1 attribute. 3 Filters the spans that have the host.name == localhost resource attribute. 3.3.9. Routing Processor The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value. Important The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled OTLP Exporter # ... config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250 # ... 1 The HTTP header name for the lookup value when performing the route. 2 The default exporter when the attribute value is not present in the table in the section. 3 The table that defines which values are to be routed to which exporters. Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context including the HTTP headers, and resource for searching the resource attributes. 3.3.10. Cumulative-to-Delta Processor The Cumulative-to-Delta Processor processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching. This processor does not convert non-monotonic sums and exponential histograms. Important The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor # ... 
config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - "<regular_expression_for_metric_names>" # ... 1 Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics. 2 Defines a value provided in the metrics field as a strict exact match or regexp regular expression. 3 Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence. 4 Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics. 3.3.11. Group-by-Attributes Processor The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes. Important The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example: # ... config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2> # ... 1 Specifies attribute keys to group by. 2 If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. 3.3.12. Transform Processor The Transform Processor enables modification of telemetry data according to specified rules and in the OpenTelemetry Transformation Language (OTTL) . For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate if a function is to be executed. All statements are written in the OTTL. You can configure multiple context statements for different signals, traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements. Important The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Configuration summary # ... config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string> # ... 1 Optional: See the following table "Values for the optional error_mode field". 2 Indicates a signal to be transformed. 3 See the following table "Values for the context field". 4 Optional: Conditions for performing a transformation. Configuration example # ... config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) 2 - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes["http.path"] == "/health" - set(name, attributes["http.route"]) - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}") - limit(attributes, 100, []) - truncate_all(attributes, 4096) # ... 1 Transforms a trace signal. 2 Keeps keys on the resources. 3 Replaces attributes and replaces string characters in password fields with asterisks. 4 Performs transformations at the span level. Table 3.4. Values for the context field Signal Statement Valid Contexts trace_statements resource , scope , span , spanevent metric_statements resource , scope , metric , datapoint log_statements resource , scope , log Table 3.5. Values for the optional error_mode field Value Description ignore Ignores and logs errors returned by statements and then continues to the statement. silent Ignores and doesn't log errors returned by statements and then continues to the statement. propagate Returns errors up the pipeline and drops the payload. Implicit default. 3.3.13. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.4. Exporters Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings. Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry: OTLP Exporter OTLP HTTP Exporter Debug Exporter Load Balancing Exporter Prometheus Exporter Prometheus Remote Write Exporter Kafka Exporter AWS CloudWatch Exporter AWS EMF Exporter AWS X-Ray Exporter File Exporter 3.4.1. OTLP Exporter The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP Exporter # ... 
config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: "dev" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp] # ... 1 The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls . 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Disables client transport security when set to true . The default value is false by default. 4 Skips verifying the certificate when set to true . The default value is false . 5 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns , us (or ms ), ms , s , m , h . 6 Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing. 7 Headers are sent for every request performed during an established connection. 3.4.2. OTLP HTTP Exporter The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP Exporter # ... config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: "dev" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp] # ... 1 The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls . 2 The client side TLS configuration. Defines paths to TLS certificates. 3 Headers are sent in every HTTP request. 4 If true, disables HTTP keep-alives. It will only use the connection to the server for a single HTTP request. 3.4.3. Debug Exporter The Debug Exporter prints traces and metrics to the standard output. OpenTelemetry Collector custom resource with the enabled Debug Exporter # ... config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] # ... 1 Verbosity of the debug export: detailed , normal , or basic . When set to detailed , pipeline data are verbosely logged. Defaults to normal . 2 Initial number of messages logged per second. The default value is 2 messages per second. 3 Sampling rate after the initial number of messages, the value in sampling_initial , has been logged. Disabled by default with the default 1 value. Sampling is enabled with values greater than 1 . For more information, see the page for the sampler function in the zapcore package on the Go Project's website. 4 When set to true , enables output from the Collector's internal logger for the exporter. 3.4.4. Load Balancing Exporter The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration. Important The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter # ... config: exporters: loadbalancing: routing_key: "service" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317 # ... 1 The routing_key: service exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID exports spans based on their traceID . The implicit default is traceID-based routing. 2 OTLP is the only supported load-balancing protocol. All options of the OTLP exporter are supported. 3 You can configure only one resolver. 4 The static resolver distributes the load across the listed endpoints. 5 You can use the DNS resolver only with a Kubernetes headless service. 6 The Kubernetes resolver is recommended. 3.4.5. Prometheus Exporter The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats. Important The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Exporter # ... config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus] # ... 1 The network endpoint where the metrics are exposed. The Red Hat build of OpenTelemetry Operator automatically exposes the port specified in the endpoint field to the <instance_name>-collector service. 2 The server-side TLS configuration. Defines paths to TLS certificates. 3 If set, metrics are exported under the provided namespace prefix. 4 Key-value pair labels that are applied for every exported metric. 5 If true , metrics are exported by using the OpenMetrics format. Exemplars are only exported in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter . Disabled by default. 6 If enabled is true , all the resource attributes are converted to metric labels. Disabled by default. 7 Defines how long metrics are exposed without updates. The default is 5m . 8 Adds the metric type and unit suffixes. Must be disabled if the monitor tab in the Jaeger console is enabled. The default is true . Note When the spec.observability.metrics.enableMetrics field in the OpenTelemetryCollector custom resource (CR) is set to true , the OpenTelemetryCollector CR automatically creates a Prometheus ServiceMonitor or PodMonitor CR to enable Prometheus to scrape your metrics. 3.4.6.
Prometheus Remote Write Exporter The Prometheus Remote Write Exporter exports metrics to compatible back ends. Important The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter # ... config: exporters: prometheusremotewrite: endpoint: "https://my-prometheus:7900/api/v1/push" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite] # ... 1 Endpoint for sending the metrics. 2 Server-side TLS configuration. Defines paths to TLS certificates. 3 When set to true , creates a target_info metric for each resource metric. 4 When set to true , exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points. 5 Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000 , which is approximately 2.861 megabytes. Warning This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics. You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance using this exporter fails. 3.4.7. Kafka Exporter The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency. Important The Kafka Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Exporter # ... config: exporters: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . This is a required field. 3 The name of the Kafka topic to export to. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs. 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates.
If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.4.8. AWS CloudWatch Logs Exporter The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain. Important The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter # ... config: exporters: awscloudwatchlogs: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5 # ... 1 Required. If the log group does not exist yet, it is automatically created. 2 Required. If the log stream does not exist yet, it is automatically created. 3 Optional. If the AWS region is not already set in the default credential chain, you must specify it. 4 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 5 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . Additional resources What is Amazon CloudWatch Logs? (Amazon CloudWatch Logs User Guide) Specifying Credentials (AWS SDK for Go Developer Guide) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.9. AWS EMF Exporter The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF): Int64DataPoints DoubleDataPoints SummaryDataPoints The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API. One of the benefits of using this exporter is the possibility to view logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . Important The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
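The reference example that follows focuses on the options of the awsemf exporter itself. As a starting point, the following minimal sketch shows how the exporter might be wired into a metrics pipeline; the otlp receiver and the placeholder region value are assumptions for illustration only.
# ...
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      awsemf:
        log_group_name: "/metrics/default"  # default log group name
        region: <aws_region>                # assumed placeholder value
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [awsemf]
# ...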
OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter # ... config: exporters: awsemf: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7 # ... 1 Customized log group name. 2 Customized log stream name. 3 Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default. 4 The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region. 5 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 6 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . 7 Optional. A custom namespace for the Amazon CloudWatch metrics. Log group name The log_group_name parameter allows you to customize the log group name and supports the default /metrics/default value or the following placeholders: /aws/metrics/{ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replace it with the actual cluster name. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Log stream name The log_stream_name parameter allows you to customize the log stream name and supports the default otel-stream value or the following placeholders: {ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute. {ContainerInstanceId} This placeholder is used to search for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskDefinitionFamily} This placeholder is used to search for the TaskDefinitionFamily or aws.ecs.task.family resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replace it with the actual task ID. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Additional resources Specification: Embedded metric format (Amazon CloudWatch User Guide) PutLogEvents (Amazon CloudWatch Logs API Reference) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.10. AWS X-Ray Exporter The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain. 
Important The AWS X-Ray Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter # ... config: exporters: awsxray: region: "<region>" 1 endpoint: <endpoint> 2 resource_arn: "<aws_resource_arn>" 3 role_arn: "<iam_role>" 4 indexed_attributes: [ "<indexed_attr_0>", "<indexed_attr_1>" ] 5 aws_log_groups: ["<group1>", "<group2>"] 6 request_timeout_seconds: 120 7 # ... 1 The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1 . 2 Optional. You can override the default AWS X-Ray service endpoint to which the requests are forwarded. For the list of service endpoints by region, see AWS X-Ray endpoints and quotas (AWS General Reference). 3 The Amazon Resource Name (ARN) of the AWS resource that is running the Collector. 4 The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account. 5 The list of attribute names to be converted to X-Ray annotations. 6 The list of log group names for Amazon CloudWatch Logs. 7 Time duration in seconds before timing out a request. If omitted, the default value is 30 . Additional resources What is AWS X-Ray? (AWS X-Ray Developer Guide) AWS SDK for Go API Reference (AWS Documentation) Specifying Credentials (AWS SDK for Go Developer Guide) IAM roles (AWS Identity and Access Management User Guide) 3.4.11. File Exporter The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files. With this exporter, you can also use a resource attribute to control file naming. The only required setting is path , which specifies the destination path for telemetry files in the persistent-volume file system. Important The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled File Exporter # ... config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9 # ... 1 The file-system path where the data is to be written. There is no default. 2 File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation. 3 The max_megabytes setting is the maximum size a file is allowed to reach until it is rotated. The default is 100 .
4 The max_days setting is for how many days a file is to be retained, counting from the timestamp in the file name. There is no default. 5 The max_backups setting is the maximum number of older files to retain. The default is 100 . 6 The localtime setting specifies the local-time format for the timestamp, which is appended to the file name in front of any extension, when the file is rotated. The default is the Coordinated Universal Time (UTC). 7 The format for encoding the telemetry data before writing it to a file. The default format is json . The proto format is also supported. 8 File compression is optional and not set by default. This setting defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default. 9 The time interval between flushes. A value without a unit is set in nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings. 3.4.12. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.5. Connectors A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data. Currently, the following General Availability and Technology Preview connectors are available for the Red Hat build of OpenTelemetry: Count Connector Routing Connector Forward Connector Spanmetrics Connector 3.5.1. Count Connector The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines. Important The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following are the default metric names: trace.span.count trace.span.event.count metric.count metric.datapoint.count log.record.count You can also expose custom metric names. OpenTelemetry Collector custom resource (CR) with an enabled Count Connector # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus] # ... 1 It is important to correctly configure the Count Connector as an exporter or receiver in the pipeline and to export the generated metrics to the correct exporter. 2 The Count Connector is configured to receive spans as an exporter. 3 The Count Connector is configured to emit generated metrics as a receiver. Tip If the Count Connector is not generating the expected metrics, you can check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flows through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data, as in the sketch that follows.
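The following is a minimal sketch, based on the preceding Count Connector example, that adds the Debug Exporter to the trace pipeline so that the incoming spans are printed to the standard output while they are being counted:
# ...
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    exporters:
      debug: {}              # prints the received spans for inspection
      prometheus:
        endpoint: 0.0.0.0:8889
    connectors:
      count: {}
    service:
      pipelines:
        traces/in:
          receivers: [otlp]
          exporters: [count, debug]   # spans are counted and printed
        metrics/out:
          receivers: [count]
          exporters: [prometheus]
# ...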
The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. Example OpenTelemetry Collector CR for the Count Connector to count spans by conditions # ... config: connectors: count: spans: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" conditions: - 'attributes["env"] == "dev"' - 'name == "devevent"' # ... 1 In this example, the exposed metric counts spans with the specified conditions. 2 You can specify a custom metric name such as cluster.prod.event.count . Tip Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors. The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes. Example OpenTelemetry Collector CR for the Count Connector to count logs by attributes # ... config: connectors: count: logs: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" attributes: - key: env default_value: unknown 3 # ... 1 Specifies attributes for logs. 2 You can specify a custom metric name such as my.log.count . 3 Defines a default value when the attribute is not set. 3.5.2. Routing Connector The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements. Important The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Routing Connector # ... config: connectors: routing: table: 1 - statement: route() where attributes["X-Tenant"] == "dev" 2 pipelines: [traces/dev] 3 - statement: route() where attributes["X-Tenant"] == "prod" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod] # ... 1 Connector routing table. 2 Routing conditions written as OTTL statements. 3 Destination pipelines for routing the matching telemetry data. 4 Destination pipelines for routing the telemetry data for which no routing condition is satisfied. 5 Error-handling mode: The propagate value is for logging an error and dropping the payload. The ignore value is for ignoring the condition and attempting to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate . 6 When set to true , the payload is routed only to the first pipeline whose routing condition is met. The default is false .
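The preceding example routes traces. The same table-based approach applies to logs and metrics pipelines, as in the following minimal sketch for routing logs by the same X-Tenant resource attribute. The otlp receiver and the otlp/dev and otlp/prod exporters are assumed to be defined elsewhere in the configuration, as in the preceding example.
# ...
  config:
    connectors:
      routing:
        table:
          - statement: route() where attributes["X-Tenant"] == "prod"
            pipelines: [logs/prod]
        default_pipelines: [logs/dev]   # logs that match no condition
    service:
      pipelines:
        logs/in:
          receivers: [otlp]
          exporters: [routing]
        logs/prod:
          receivers: [routing]
          exporters: [otlp/prod]
        logs/dev:
          receivers: [routing]
          exporters: [otlp/dev]
# ...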
3.5.3. Forward Connector The Forward Connector merges two pipelines of the same type. Important The Forward Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Forward Connector # ... config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp] # ... 3.5.4. Spanmetrics Connector The Spanmetrics Connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data. OpenTelemetry Collector custom resource with an enabled Spanmetrics Connector # ... config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics] # ... 1 Defines the flush interval of the generated metrics. Defaults to 15s . 3.5.5. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.6. Extensions Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically. Currently, the following General Availability and Technology Preview extensions are available for the Red Hat build of OpenTelemetry: BearerTokenAuth Extension OAuth2Client Extension File Storage Extension OIDC Auth Extension Jaeger Remote Sampling Extension Performance Profiler Extension Health Check Extension zPages Extension 3.6.1. BearerTokenAuth Extension The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs. OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension # ... config: extensions: bearertokenauth: scheme: "Bearer" 1 token: "<token>" 2 filename: "<token_file>" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 You can configure the BearerTokenAuth Extension to send a custom scheme . The default is Bearer . 2 You can add the BearerTokenAuth Extension token as metadata to identify a message. 3 Path to a file that contains an authorization token that is transmitted with every message. 4 You can assign the authenticator configuration to an OTLP Receiver. 5 You can assign the authenticator configuration to an OTLP Exporter. 3.6.2. 
OAuth2Client Extension The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. This extension supports traces, metrics, and logs. Important The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension # ... config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: ["api.metrics"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Client identifier, which is provided by the identity provider. 2 Confidential key used to authenticate the client to the identity provider. 3 Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token. 4 The URL of the OAuth2 token endpoint, where the Collector requests access tokens. 5 The scopes define the specific permissions or access levels requested by the client. 6 The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens. 7 When set to true , configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint. 8 The path to a Certificate Authority (CA) file that is used to verify the server's certificate during the TLS handshake. 9 The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required. 10 The path to the client's private key file that is used with the client certificate if needed for authentication. 11 Sets a timeout for the token client's request. 12 You can assign the authenticator configuration to an OTLP exporter. 3.6.3. File Storage Extension The File Storage Extension supports traces, metrics, and logs. This extension can persist the state to the local file system. This extension persists the sending queue for the OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and the gRPC protocols. This extension requires read and write access to a directory. This extension can use a default directory, but the default directory must already exist. Important The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue # ... config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Specifies the directory in which the telemetry data is stored. 2 Specifies the timeout interval for opening the stored files. 3 Starts compaction when the Collector starts. If omitted, the default is false . 4 Specifies the directory in which the compactor stores the telemetry data. 5 Defines the maximum size of the compaction transaction. To ignore the transaction size, set to zero. If omitted, the default is 65536 bytes. 6 When set, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance. 7 Buffers the OTLP Exporter data on the local file system. 8 Enables the File Storage Extension so that the Collector starts it. 3.6.4. OIDC Auth Extension The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request. Important The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured OIDC Auth Extension # ... config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The name of the header that contains the ID token. The default name is authorization . 2 The base URL of the OIDC provider. 3 Optional: The path to the issuer's CA certificate. 4 The audience for the token. 5 The name of the claim that contains the username. The default name is sub . 3.6.5. Jaeger Remote Sampling Extension The Jaeger Remote Sampling Extension enables serving sampling strategies following Jaeger's remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server such as a Jaeger collector down the pipeline or to a static JSON file from the local file system.
Important The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension # ... config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The time interval at which the sampling configuration is updated. 2 The endpoint for reaching the Jaeger remote sampling strategy provider. 3 The path to a local file that contains a sampling strategy configuration in the JSON format. Example of a Jaeger Remote Sampling strategy file { "service_strategies": [ { "service": "foo", "type": "probabilistic", "param": 0.8, "operation_strategies": [ { "operation": "op1", "type": "probabilistic", "param": 0.2 }, { "operation": "op2", "type": "probabilistic", "param": 0.4 } ] }, { "service": "bar", "type": "ratelimiting", "param": 5 } ], "default_strategy": { "type": "probabilistic", "param": 0.5, "operation_strategies": [ { "operation": "/health", "type": "probabilistic", "param": 0.0 }, { "operation": "/metrics", "type": "probabilistic", "param": 0.0 } ] } } 3.6.6. Performance Profiler Extension The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service. Important The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Performance Profiler Extension # ... config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The endpoint at which this extension listens. Use localhost:<port> to make it available only locally, or ":<port>" to make it available on all network interfaces. The default value is localhost:1777 . 2 Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 . 3 Sets a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 .
4 The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated. 3.6.7. Health Check Extension The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift. Important The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Health Check Extension # ... config: extensions: health_check: endpoint: "0.0.0.0:13133" 1 tls: 2 ca_file: "/path/to/ca.crt" cert_file: "/path/to/cert.crt" key_file: "/path/to/key.key" path: "/health/status" 3 check_collector_pipeline: 4 enabled: true 5 interval: "5m" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The target IP address for publishing the health check status. The default is 0.0.0.0:13133 . 2 The TLS server-side configuration. Defines paths to TLS certificates. If omitted, TLS is disabled. 3 The path for the health check server. The default is / . 4 Settings for the Collector pipeline health check. 5 Enables the Collector pipeline health check. The default is false . 6 The time interval for checking the number of failures. The default is 5m . 7 The number of failures tolerated before the container is marked as unhealthy. The default is 5 . 3.6.8. zPages Extension The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time. You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint. Important The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured zPages Extension # ... config: extensions: zpages: endpoint: "localhost:55679" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 Specifies the HTTP endpoint for serving the zPages extension. The default is localhost:55679 .
Important Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route. You can enable port-forwarding by running the following oc command: $ oc port-forward pod/$(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679 The Collector provides the following zPages for diagnostics: ServiceZ Shows an overview of the Collector services and links to the following zPages: PipelineZ , ExtensionZ , and FeatureZ . This page also displays information about the build version and runtime. An example of this page's URL is http://localhost:55679/debug/servicez . PipelineZ Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page's URL is http://localhost:55679/debug/pipelinez . ExtensionZ Shows the currently active extensions in the Collector. An example of this page's URL is http://localhost:55679/debug/extensionz . FeatureZ Shows the feature gates enabled in the Collector along with their status and description. An example of this page's URL is http://localhost:55679/debug/featurez . TraceZ Shows spans categorized by latency. Available time ranges include 0 µs, 10 µs, 100 µs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, 1 m. This page also allows for quick inspection of error samples. An example of this page's URL is http://localhost:55679/debug/tracez . 3.6.9. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.7. Target Allocator The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CR). When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver that connects to the Target Allocator service. Important The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example OpenTelemetryCollector CR with the enabled Target Allocator apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] # ... 1 When the Target Allocator is enabled, the deployment mode must be set to statefulset . 2 Enables the Target Allocator. Defaults to false . 3 The service account name of the Target Allocator deployment.
The service account needs to have RBAC permissions to get the ServiceMonitor and PodMonitor custom resources, and other objects from the cluster, to properly set labels on scraped metrics. The default service name is <collector_name>-targetallocator . 4 Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources. 5 Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors. 6 Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors. 7 Prometheus receiver with the minimal, empty scrape_configs: [] configuration option. The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration. RBAC configuration for the Target Allocator service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [""] resources: - services - pods - namespaces verbs: ["get", "list", "watch"] - apiGroups: ["monitoring.coreos.com"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: ["get", "list", "watch"] - apiGroups: ["discovery.k8s.io"] resources: - endpointslices verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2 # ... 1 The name of the Target Allocator service account. 2 The namespace of the Target Allocator service account. | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]",
"receivers:",
"processors:",
"exporters:",
"connectors:",
"extensions:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"service: pipelines: metrics: receivers:",
"service: pipelines: metrics: processors:",
"service: pipelines: metrics: exporters:",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]",
"config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]",
"config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]",
"config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]",
"config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2",
"config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]",
"config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]",
"config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default",
"config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]",
"config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev",
"apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]",
"config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]",
"config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false",
"config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int",
"config: processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete",
"config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2",
"config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1",
"config: processors: span/set_status: status: code: Error description: \"<error_description>\"",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']",
"config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME",
"config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3",
"config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250",
"config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"",
"config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>",
"config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>",
"config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)",
"config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]",
"config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]",
"config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]",
"config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317",
"config: exporters: prometheus: endpoint: 0.0.0.0:8889 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 3 const_labels: 4 label1: value1 enable_open_metrics: true 5 resource_to_telemetry_conversion: 6 enabled: true metric_expiration: 180m 7 add_metric_suffixes: false 8 service: pipelines: metrics: exporters: [prometheus]",
"config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]",
"config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]",
"config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5",
"config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7",
"config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7",
"config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]",
"config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'",
"config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3",
"config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]",
"config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]",
"config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics]",
"config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]",
"{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }",
"config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]",
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/configuring-the-collector |
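The receiver, processor, exporter, connector, and extension snippets listed above are partial spec.config fragments rather than complete resources. As a minimal sketch of how such a fragment fits into a full custom resource, the following example assembles an OTLP receiver, the batch processor, and the debug exporter into one OpenTelemetryCollector object; the name, namespace, and deployment mode are illustrative assumptions, not values taken from this reference.

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel                  # hypothetical name
  namespace: observability    # hypothetical namespace
spec:
  mode: deployment            # assumed mode; daemonset, sidecar, and statefulset are also possible
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]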
Chapter 8. Uninstalling a cluster on Alibaba Cloud | Chapter 8. Uninstalling a cluster on Alibaba Cloud You can remove a cluster that you deployed to Alibaba Cloud. 8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_alibaba/uninstalling-cluster-alibaba |
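A hedged sketch of the optional cleanup step, assuming the installer binary sits in the current working directory; replace <installation_directory> with the directory you actually used:

rm -rf <installation_directory>
rm ./openshift-install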
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_data_grid_with_spring/making-open-source-more-inclusive_datagrid |
Chapter 35. Getting started with Multipath TCP | Chapter 35. Getting started with Multipath TCP Transmission Control Protocol (TCP) ensures reliable delivery of the data through the internet and automatically adjusts its bandwidth in response to network load. Multipath TCP (MPTCP) is an extension to the original TCP protocol (single-path). MPTCP enables a transport connection to operate across multiple paths simultaneously, and brings network connection redundancy to user endpoint devices. 35.1. Understanding MPTCP The Multipath TCP (MPTCP) protocol allows for simultaneous usage of multiple paths between connection endpoints. The protocol design improves connection stability and also brings other benefits compared to the single-path TCP. Note In MPTCP terminology, links are considered as paths. The following are some of the advantages of using MPTCP: It allows a connection to simultaneously use multiple network interfaces. In case a connection is bound to a link speed, the usage of multiple links can increase the connection throughput. Note, that in case of the connection is bound to a CPU, the usage of multiple links causes the connection slowdown. It increases the resilience to link failures. For more details about MPTCP, review the Additional resources . Additional resources Understanding Multipath TCP: High availability for endpoints and the networking highway of the future RFC8684: TCP Extensions for Multipath Operation with Multiple Addresses Multipath TCP on Red Hat Enterprise Linux 8.3: From 0 to 1 subflows 35.2. Preparing RHEL to enable MPTCP support By default the MPTCP support is disabled in RHEL. Enable MPTCP so that applications that support this feature can use it. Additionally, you have to configure user space applications to force use MPTCP sockets if those applications have TCP sockets by default. You can use the sysctl utility to enable MPTCP support and prepare RHEL for enabling MPTCP for applications system-wide using a SystemTap script. Prerequisites The following packages are installed: systemtap iperf3 Procedure Enable MPTCP sockets in the kernel: Verify that MPTCP is enabled in the kernel: Create a mptcp-app.stap file with the following content: #!/usr/bin/env stap %{ #include <linux/in.h> #include <linux/ip.h> %} /* RSI contains 'type' and RDX contains 'protocol'. */ function mptcpify () %{ if (CONTEXT->kregs->si == SOCK_STREAM && (CONTEXT->kregs->dx == IPPROTO_TCP || CONTEXT->kregs->dx == 0)) { CONTEXT->kregs->dx = IPPROTO_MPTCP; STAP_RETVALUE = 1; } else { STAP_RETVALUE = 0; } %} probe kernel.function("__sys_socket") { if (mptcpify() == 1) { printf("command %16s mptcpified\n", execname()); } } Force user space applications to create MPTCP sockets instead of TCP ones: Note: This operation affects all TCP sockets which are started after the command. The applications will continue using TCP sockets after you interrupt the command above with Ctrl + C . Alternatively, to allow MPTCP usage to only specific application, you can modify the mptcp-app.stap file with the following content: #!/usr/bin/env stap %{ #include <linux/in.h> #include <linux/ip.h> %} /* according to [1], RSI contains 'type' and RDX * contains 'protocol'. 
* [1] https://github.com/torvalds/linux/blob/master/arch/x86/entry/entry_64.S#L79 */ function mptcpify () %{ if (CONTEXT->kregs->si == SOCK_STREAM && (CONTEXT->kregs->dx == IPPROTO_TCP || CONTEXT->kregs->dx == 0)) { CONTEXT->kregs->dx = IPPROTO_MPTCP; STAP_RETVALUE = 1; } else { STAP_RETVALUE = 0; } %} probe kernel.function("__sys_socket") { cur_proc = execname() if ((cur_proc == @1) && (mptcpify() == 1)) { printf("command %16s mptcpified\n", cur_proc); } } In case of alternative choice, assuming, you want to force the iperf3 tool to use MPTCP instead of TCP. To do so, enter the following command: After the mptcp-app.stap script installs the kernel probe, the following warnings appear in the kernel dmesg output Start the iperf3 server: Connect the client to the server: After the connection is established, verify the ss output to see the subflow-specific status: Verify MPTCP counters: Additional resources How can I download or install debuginfo packages for RHEL systems? (Red Hat Knowledgebase) tcp(7) and mptcpize(8) man pages on your system 35.3. Using iproute2 to temporarily configure and enable multiple paths for MPTCP applications Each MPTCP connection uses a single subflow similar to plain TCP. To get the MPTCP benefits, specify a higher limit for maximum number of subflows for each MPTCP connection. Then configure additional endpoints to create those subflows. Important The configuration in this procedure will not persist after rebooting your machine. Note that MPTCP does not yet support mixed IPv6 and IPv4 endpoints for the same socket. Use endpoints belonging to the same address family. Prerequisites The iperf3 package is installed Server network interface settings: enp4s0: 192.0.2.1/24 enp1s0: 198.51.100.1/24 Client network interface settings: enp4s0f0: 192.0.2.2/24 enp4s0f1: 198.51.100.2/24 Procedure Configure the client to accept up to 1 additional remote address, as provided by the server: Add IP address 198.51.100.1 as a new MPTCP endpoint on the server: The signal option ensures that the ADD_ADDR packet is sent after the three-way-handshake. Start the iperf3 server: Connect the client to the server: Verification Verify the connection is established: Verify the connection and IP address limit: Verify the newly added endpoint: Verify MPTCP counters by using the nstat MPTcp* command on a server: Additional resources mptcpize(8) and ip-mptcp(8) man pages on your system 35.4. Permanently configuring multiple paths for MPTCP applications You can configure MultiPath TCP (MPTCP) using the nmcli command to permanently establish multiple subflows between a source and destination system. The subflows can use different resources, different routes to the destination, and even different networks. Such as Ethernet, cellular, wifi, and so on. As a result, you achieve combined connections, which increase network resilience and throughput. The server uses the following network interfaces in our example: enp4s0: 192.0.2.1/24 enp1s0: 198.51.100.1/24 enp7s0: 192.0.2.3/24 The client uses the following network interfaces in our example: enp4s0f0: 192.0.2.2/24 enp4s0f1: 198.51.100.2/24 enp6s0: 192.0.2.5/24 Prerequisites You configured the default gateway on the relevant interfaces. Procedure Enable MPTCP sockets in the kernel: Optional: The RHEL kernel default for subflow limit is 2. 
If you require more: Create the /etc/systemd/system/set_mptcp_limit.service file with the following content: The oneshot unit executes the ip mptcp limits set subflows 3 command after your network ( network.target ) is operational during every boot process. The ip mptcp limits set subflows 3 command sets the maximum number of additional subflows for each connection, so 4 in total. It is possible to add maximally 3 additional subflows. Enable the set_mptcp_limit service: Enable MPTCP on all connection profiles that you want to use for connection aggregation: The connection.mptcp-flags parameter configures MPTCP endpoints and the IP address flags. If MPTCP is enabled in a NetworkManager connection profile, the setting will configure the IP addresses of the relevant network interface as MPTCP endpoints. By default, NetworkManager does not add MPTCP flags to IP addresses if there is no default gateway. If you want to bypass that check, you need to use the also-without-default-route flag. Verification Verify that you enabled the MPTCP kernel parameter: Verify that you set the subflow limit correctly, in case the default was not enough: Verify that you configured the per-address MPTCP setting correctly: Additional resources nm-settings-nmcli(5) ip-mptcp(8) Section 35.1, "Understanding MPTCP" Understanding Multipath TCP: High availability for endpoints and the networking highway of the future RFC8684: TCP Extensions for Multipath Operation with Multiple Addresses Using Multipath TCP to better survive outages and increase bandwidth 35.5. Monitoring MPTCP sub-flows The life cycle of a multipath TCP (MPTCP) socket can be complex: The main MPTCP socket is created, the MPTCP path is validated, one or more sub-flows are created and eventually removed. Finally, the MPTCP socket is terminated. The MPTCP protocol allows monitoring MPTCP-specific events related to socket and sub-flow creation and deletion, using the ip utility provided by the iproute package. This utility uses the netlink interface to monitor MPTCP events. This procedure demonstrates how to monitor MPTCP events. For that, it simulates a MPTCP server application, and a client connects to this service. The involved clients in this example use the following interfaces and IP addresses: Server: 192.0.2.1 Client (Ethernet connection): 192.0.2.2 Client (WiFi connection): 192.0.2.3 To simplify this example, all interfaces are within the same subnet. This is not a requirement. However, it is important that routing has been configured correctly, and the client can reach the server via both interfaces. Prerequisites A RHEL client with two network interfaces, such as a laptop with Ethernet and WiFi The client can connect to the server via both interfaces A RHEL server Both the client and the server run RHEL 8.6 or later Procedure Set the per connection additional subflow limits to 1 on both client and server: On the server, to simulate a MPTCP server application, start netcat ( nc ) in listen mode with enforced MPTCP sockets instead of TCP sockets: The -k option causes that nc does not close the listener after the first accepted connection. This is required to demonstrate the monitoring of sub-flows. On the client: Identify the interface with the lowest metric: The enp1s0 interface has a lower metric than wlp1s0 . Therefore, RHEL uses enp1s0 by default. 
On the first terminal, start the monitoring: On the second terminal, start a MPTCP connection to the server: RHEL uses the enp1s0 interface and its associated IP address as a source for this connection. On the monitoring terminal, the ip mptcp monitor command now logs: The token identifies the MPTCP socket as an unique ID, and later it enables you to correlate MPTCP events on the same socket. On the terminal with the running nc connection to the server, press Enter . This first data packet fully establishes the connection. Note that, as long as no data has been sent, the connection is not established. On the monitoring terminal, ip mptcp monitor now logs: Optional: Display the connections to port 12345 on the server: At this point, only one connection to the server has been established. On a third terminal, create another endpoint: This command sets the name and IP address of the WiFi interface of the client in this command. On the monitoring terminal, ip mptcp monitor now logs: The locid field displays the local address ID of the new sub-flow and identifies this sub-flow even if the connection uses network address translation (NAT). The saddr4 field matches the endpoint's IP address from the ip mptcp endpoint add command. Optional: Display the connections to port 12345 on the server: The command now displays two connections: The connection with source address 192.0.2.2 corresponds to the first MPTCP sub-flow that you established previously. The connection from the sub-flow over the wlp1s0 interface with source address 192.0.2.3 . On the third terminal, delete the endpoint: Use the ID from the locid field from the ip mptcp monitor output, or retrieve the endpoint ID using the ip mptcp endpoint show command. On the monitoring terminal, ip mptcp monitor now logs: On the first terminal with the nc client, press Ctrl + C to terminate the session. On the monitoring terminal, ip mptcp monitor now logs: Additional resources ip-mptcp(1) man page on your system How NetworkManager manages multiple default gateways 35.6. Disabling Multipath TCP in the kernel You can explicitly disable the MPTCP option in the kernel. Procedure Disable the mptcp.enabled option. Verification Verify whether the mptcp.enabled is disabled in the kernel. | [
"echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf",
"sysctl -a | grep mptcp.enabled net.mptcp.enabled = 1",
"#!/usr/bin/env stap %{ #include <linux/in.h> #include <linux/ip.h> %} /* RSI contains 'type' and RDX contains 'protocol'. */ function mptcpify () %{ if (CONTEXT->kregs->si == SOCK_STREAM && (CONTEXT->kregs->dx == IPPROTO_TCP || CONTEXT->kregs->dx == 0)) { CONTEXT->kregs->dx = IPPROTO_MPTCP; STAP_RETVALUE = 1; } else { STAP_RETVALUE = 0; } %} probe kernel.function(\"__sys_socket\") { if (mptcpify() == 1) { printf(\"command %16s mptcpified\\n\", execname()); } }",
"stap -vg mptcp-app.stap",
"#!/usr/bin/env stap %{ #include <linux/in.h> #include <linux/ip.h> %} /* according to [1], RSI contains 'type' and RDX * contains 'protocol'. * [1] https://github.com/torvalds/linux/blob/master/arch/x86/entry/entry_64.S#L79 */ function mptcpify () %{ if (CONTEXT->kregs->si == SOCK_STREAM && (CONTEXT->kregs->dx == IPPROTO_TCP || CONTEXT->kregs->dx == 0)) { CONTEXT->kregs->dx = IPPROTO_MPTCP; STAP_RETVALUE = 1; } else { STAP_RETVALUE = 0; } %} probe kernel.function(\"__sys_socket\") { cur_proc = execname() if ((cur_proc == @1) && (mptcpify() == 1)) { printf(\"command %16s mptcpified\\n\", cur_proc); } }",
"stap -vg mptcp-app.stap iperf3",
"dmesg [ 1752.694072] Kprobes globally unoptimized [ 1752.730147] stap_1ade3b3356f3e68765322e26dec00c3d_1476: module_layout: kernel tainted. [ 1752.732162] Disabling lock debugging due to kernel taint [ 1752.733468] stap_1ade3b3356f3e68765322e26dec00c3d_1476: loading out-of-tree module taints kernel. [ 1752.737219] stap_1ade3b3356f3e68765322e26dec00c3d_1476: module verification failed: signature and/or required key missing - tainting kernel [ 1752.737219] stap_1ade3b3356f3e68765322e26dec00c3d_1476 (mptcp-app.stap): systemtap: 4.5/0.185, base: ffffffffc0550000, memory: 224data/32text/57ctx/65638net/367alloc kb, probes: 1",
"iperf3 -s Server listening on 5201",
"iperf3 -c 127.0.0.1 -t 3",
"ss -nti '( dport :5201 )' State Recv-Q Send-Q Local Address:Port Peer Address:Port Process ESTAB 0 0 127.0.0.1:41842 127.0.0.1:5201 cubic wscale:7,7 rto:205 rtt:4.455/8.878 ato:40 mss:21888 pmtu:65535 rcvmss:536 advmss:65483 cwnd:10 bytes_sent:141 bytes_acked:142 bytes_received:4 segs_out:8 segs_in:7 data_segs_out:3 data_segs_in:3 send 393050505bps lastsnd:2813 lastrcv:2772 lastack:2772 pacing_rate 785946640bps delivery_rate 10944000000bps delivered:4 busy:41ms rcv_space:43690 rcv_ssthresh:43690 minrtt:0.008 tcp-ulp-mptcp flags:Mmec token:0000(id:0)/2ff053ec(id:0) seq:3e2cbea12d7673d4 sfseq:3 ssnoff:ad3d00f4 maplen:2",
"nstat MPTcp * #kernel MPTcpExtMPCapableSYNRX 2 0.0 MPTcpExtMPCapableSYNTX 2 0.0 MPTcpExtMPCapableSYNACKRX 2 0.0 MPTcpExtMPCapableACKRX 2 0.0",
"ip mptcp limits set add_addr_accepted 1",
"ip mptcp endpoint add 198.51.100.1 dev enp1s0 signal",
"iperf3 -s Server listening on 5201",
"iperf3 -c 192.0.2.1 -t 3",
"ss -nti '( sport :5201 )'",
"ip mptcp limit show",
"ip mptcp endpoint show",
"nstat MPTcp * #kernel MPTcpExtMPCapableSYNRX 2 0.0 MPTcpExtMPCapableACKRX 2 0.0 MPTcpExtMPJoinSynRx 2 0.0 MPTcpExtMPJoinAckRx 2 0.0 MPTcpExtEchoAdd 2 0.0",
"echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf",
"[Unit] Description=Set MPTCP subflow limit to 3 After=network.target [Service] ExecStart=ip mptcp limits set subflows 3 Type=oneshot [Install] WantedBy=multi-user.target",
"systemctl enable --now set_mptcp_limit",
"nmcli connection modify <profile_name> connection.mptcp-flags signal,subflow,also-without-default-route",
"sysctl net.mptcp.enabled net.mptcp.enabled = 1",
"ip mptcp limit show add_addr_accepted 2 subflows 3",
"ip mptcp endpoint show 192.0.2.1 id 1 subflow dev enp4s0 198.51.100.1 id 2 subflow dev enp1s0 192.0.2.3 id 3 subflow dev enp7s0 192.0.2.4 id 4 subflow dev enp3s0",
"ip mptcp limits set add_addr_accepted 0 subflows 1",
"nc -l -k -p 12345",
"ip -4 route 192.0.2.0/24 dev enp1s0 proto kernel scope link src 192.0.2.2 metric 100 192.0.2.0/24 dev wlp1s0 proto kernel scope link src 192.0.2.3 metric 600",
"ip mptcp monitor",
"nc 192.0.2.1 12345",
"[ CREATED] token=63c070d2 remid=0 locid=0 saddr4=192.0.2.2 daddr4=192.0.2.1 sport=36444 dport=12345",
"[ ESTABLISHED] token=63c070d2 remid=0 locid=0 saddr4=192.0.2.2 daddr4=192.0.2.1 sport=36444 dport=12345",
"ss -taunp | grep \":12345\" tcp ESTAB 0 0 192.0.2.2:36444 192.0.2.1:12345",
"ip mptcp endpoint add dev wlp1s0 192.0.2.3 subflow",
"[SF_ESTABLISHED] token=63c070d2 remid=0 locid=2 saddr4=192.0.2.3 daddr4=192.0.2.1 sport=53345 dport=12345 backup=0 ifindex=3",
"ss -taunp | grep \":12345\" tcp ESTAB 0 0 192.0.2.2:36444 192.0.2.1:12345 tcp ESTAB 0 0 192.0.2.3%wlp1s0:53345 192.0.2.1:12345",
"ip mptcp endpoint delete id 2",
"[ SF_CLOSED] token=63c070d2 remid=0 locid=2 saddr4=192.0.2.3 daddr4=192.0.2.1 sport=53345 dport=12345 backup=0 ifindex=3",
"[ CLOSED] token=63c070d2",
"echo \"net.mptcp.enabled=0\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf",
"sysctl -a | grep mptcp.enabled net.mptcp.enabled = 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/getting-started-with-multipath-tcp_configuring-and-managing-networking |
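In addition to the per-connection checks above, a quick system-wide verification sketch is shown below. It assumes an iproute2 build with MPTCP support (the -M/--mptcp selector for ss) and reads the same MPTcp* counters that appear in the nstat output earlier in this chapter:

# Confirm the protocol is enabled
sysctl net.mptcp.enabled
# List MPTCP sockets together with their subflow details
ss -nti -M
# Dump all MPTCP MIB counters, including those that are still zero
nstat -az "MPTcpExt*"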
Chapter 11. Using Kerberos | Chapter 11. Using Kerberos Maintaining system security and integrity within a network is critical, and it encompasses every user, application, service, and server within the network infrastructure. It requires an understanding of everything that is running on the network and the manner in which these services are used. At the core of maintaining this security is maintaining access to these applications and services and enforcing that access. Kerberos is an authentication protocol significantly safer than normal password-based authentication. With Kerberos, passwords are never sent over the network, even when services are accessed on other machines. Kerberos provides a mechanism that allows both users and machines to identify themselves to network and receive defined, limited access to the areas and services that the administrator configured. Kerberos authenticates entities by verifying their identity, and Kerberos also secures this authenticating data so that it cannot be accessed and used or tampered with by an outsider. 11.1. About Kerberos Kerberos uses symmetric-key cryptography [3] to authenticate users to network services, which means passwords are never actually sent over the network. Consequently, when users authenticate to network services using Kerberos, unauthorized users attempting to gather passwords by monitoring network traffic are effectively thwarted. 11.1.1. The Basics of How Kerberos Works Most conventional network services use password-based authentication schemes, where a user supplies a password to access a given network server. However, the transmission of authentication information for many services is unencrypted. For such a scheme to be secure, the network has to be inaccessible to outsiders, and all computers and users on the network must be trusted and trustworthy. With simple, password-based authentication, a network that is connected to the Internet cannot be assumed to be secure. Any attacker who gains access to the network can use a simple packet analyzer, or packet sniffer , to intercept user names and passwords, compromising user accounts and, therefore, the integrity of the entire security infrastructure. Kerberos eliminates the transmission of unencrypted passwords across the network and removes the potential threat of an attacker sniffing the network. Rather than authenticating each user to each network service separately as with simple password authentication, Kerberos uses symmetric encryption and a trusted third party (a key distribution center or KDC) to authenticate users to a suite of network services. The computers managed by that KDC and any secondary KDCs constitute a realm . When a user authenticates to the KDC, the KDC sends a set of credentials (a ticket ) specific to that session back to the user's machine, and any Kerberos-aware services look for the ticket on the user's machine rather than requiring the user to authenticate using a password. As shown in Figure 11.1, "Kerberos Authentication" , each user is identified to the KDC with a unique identity, called a principal . When a user on a Kerberos-aware network logs into his workstation, his principal is sent to the KDC as part of a request for a ticket-granting ticket (or TGT) from the authentication server. This request can be sent by the login program so that it is transparent to the user or can be sent manually by a user through the kinit program after the user logs in. The KDC then checks for the principal in its database. 
If the principal is found, the KDC creates a TGT, encrypts it using the user's key, and sends the TGT to that user. Figure 11.1. Kerberos Authentication The login or kinit program on the client then decrypts the TGT using the user's key, which it computes from the user's password. The user's key is used only on the client machine and is not transmitted over the network. The ticket (or credentials) sent by the KDC are stored in a local store, the credential cache (ccache) , which can be checked by Kerberos-aware services. Red Hat Enterprise Linux 7 supports the following types of credential caches: The persistent KEYRING ccache type, the default cache in Red Hat Enterprise Linux 7 The System Security Services Daemon (SSSD) Kerberos Credential Manager (KCM), an alternative option since Red Hat Enterprise Linux 7.4 FILE DIR MEMORY With SSSD KCM, the Kerberos caches are not stored in a passive store, but managed by a daemon. In this setup, the Kerberos library, which is typically used by applications such as kinit , is a KCM client and the daemon is referred to as a KCM server. Having the Kerberos credential caches managed by the SSSD KCM daemon has several advantages: The daemon is stateful and can perform tasks such as Kerberos credential cache renewals or reaping old ccaches. Renewals and tracking are possible not only for tickets that SSSD itself acquired, typically via a login through pam_sss.so , but also for tickets acquired, for example, though kinit . Since the process runs in user space, it is subject to UID namespacing, unlike the Kernel KEYRING. Unlike the Kernel KEYRING-based cache, which is entirely dependent on the UID of the caller and which, in a containerized environment, is shared among all containers, the KCM server's entry point is a UNIX socket that can be bind-mounted only to selected containers. After authentication, servers can check an unencrypted list of recognized principals and their keys rather than checking kinit ; this is kept in a keytab . The TGT is set to expire after a certain period of time (usually 10 to 24 hours) and is stored in the client machine's credential cache. An expiration time is set so that a compromised TGT is of use to an attacker for only a short period of time. After the TGT has been issued, the user does not have to enter their password again until the TGT expires or until they log out and log in again. Whenever the user needs access to a network service, the client software uses the TGT to request a new ticket for that specific service from the ticket-granting server (TGS). The service ticket is then used to authenticate the user to that service transparently. 11.1.2. About Kerberos Principal Names The principal identifies not only the user or service, but also the realm that the entity belongs to. A principal name has two parts, the identifier and the realm: For a user, the identifier is only the Kerberos user name. For a service, the identifier is a combination of the service name and the host name of the machine it runs on: The service name is a case-sensitive string that is specific to the service type, like host , ldap , http , and DNS . Not all services have obvious principal identifiers; the sshd daemon, for example, uses the host service principal. The host principal is usually stored in /etc/krb5.keytab . When Kerberos requests a ticket, it always resolves the domain name aliases (DNS CNAME records) to the corresponding DNS address (A or AAAA records). 
The host name from the address record is then used when service or host principals are created. For example: A service attempts to connect to the host using its CNAME alias: The Kerberos server requests a ticket for the resolved host name, [email protected] , so the host principal must be host/[email protected] . 11.1.3. About the Domain-to-Realm Mapping When a client attempts to access a service running on a particular server, it knows the name of the service ( host ) and the name of the server ( foo.example.com ), but because more than one realm can be deployed on the network, it must guess at the name of the Kerberos realm in which the service resides. By default, the name of the realm is taken to be the DNS domain name of the server in all capital letters. In some configurations, this will be sufficient, but in others, the realm name which is derived will be the name of a non-existent realm. In these cases, the mapping from the server's DNS domain name to the name of its realm must be specified in the domain_realm section of the client system's /etc/krb5.conf file. For example: The configuration specifies two mappings. The first mapping specifies that any system in the example.com DNS domain belongs to the EXAMPLE.COM realm. The second specifies that a system with the exact name example.com is also in the realm. The distinction between a domain and a specific host is marked by the presence or lack of an initial period character. The mapping can also be stored directly in DNS using the "_kerberos TXT" records, for example: 11.1.4. Environmental Requirements Kerberos relies on being able to resolve machine names. Thus, it requires a working domain name service (DNS). Both DNS entries and hosts on the network must be properly configured, which is covered in the Kerberos documentation in /usr/share/doc/krb5-server- version-number . Applications that accept Kerberos authentication require time synchronization. You can set up approximate clock synchronization between the machines on the network using a service such as ntpd . For information on the ntpd service, see the documentation in /usr/share/doc/ntp- version-number /html/index.html or the ntpd (8) man page. Note Kerberos clients running Red Hat Enterprise Linux 7 support automatic time adjustment with the KDC and have no strict timing requirements. This enables better tolerance to clocking differences when deploying IdM clients with Red Hat Enterprise Linux 7. 11.1.5. Considerations for Deploying Kerberos Although Kerberos removes a common and severe security threat, it is difficult to implement for a variety of reasons: Kerberos assumes that each user is trusted but is using an untrusted host on an untrusted network. Its primary goal is to prevent unencrypted passwords from being transmitted across that network. However, if anyone other than the proper user has access to the one host that issues tickets used for authentication - the KDC - the entire Kerberos authentication system are at risk. For an application to use Kerberos, its source must be modified to make the appropriate calls into the Kerberos libraries. Applications modified in this way are considered to be Kerberos-aware . For some applications, this can be quite problematic due to the size of the application or its design. For other incompatible applications, changes must be made to the way in which the server and client communicate. Again, this can require extensive programming. 
Closed source applications that do not have Kerberos support by default are often the most problematic. To secure a network with Kerberos, one must either use Kerberos-aware versions of all client and server applications that transmit passwords unencrypted, or not use that client and server application at all. Migrating user passwords from a standard UNIX password database, such as /etc/passwd or /etc/shadow , to a Kerberos password database can be tedious. There is no automated mechanism to perform this task. Migration methods can vary substantially depending on the particular way Kerberos is deployed. That is why it is recommended that you use the Identity Management feature; it has specialized tools and methods for migration. Warning The Kerberos system can be compromised if a user on the network authenticates against a non-Kerberos aware service by transmitting a password in plain text. The use of non-Kerberos aware services (including telnet and FTP) is highly discouraged. Other encrypted protocols, such as SSH or SSL-secured services, are preferred to unencrypted services, but this is still not ideal. 11.1.6. Additional Resources for Kerberos Kerberos can be a complex service to implement, with a lot of flexibility in how it is deployed. Table 11.1, "External Kerberos Documentation" and Table 11.2, "Important Kerberos Man Pages" list of a few of the most important or most useful sources for more information on using Kerberos. Table 11.1. External Kerberos Documentation Documentation Location Kerberos V5 Installation Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 System Administrator's Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 UNIX User's Guide (in both PostScript and HTML) /usr/share/doc/krb5-workstation- version-number "Kerberos: The Network Authentication Protocol" web page from MIT http://web.mit.edu/kerberos/www/ Designing an Authentication System: a Dialogue in Four Scenes , originally by Bill Bryant in 1988, modified by Theodore Ts'o in 1997. This document is a conversation between two developers who are thinking through the creation of a Kerberos-style authentication system. The conversational style of the discussion makes this a good starting place for people who are completely unfamiliar with Kerberos. http://web.mit.edu/kerberos/www/dialogue.html An article for making a network Kerberos-aware. http://www.ornl.gov/~jar/HowToKerb.html Any of the manpage files can be opened by running man command_name . Table 11.2. Important Kerberos Man Pages Manpage Description Client Applications kerberos An introduction to the Kerberos system which describes how credentials work and provides recommendations for obtaining and destroying Kerberos tickets. The bottom of the man page references a number of related man pages. kinit Describes how to use this command to obtain and cache a ticket-granting ticket. kdestroy Describes how to use this command to destroy Kerberos credentials. klist Describes how to use this command to list cached Kerberos credentials. Administrative Applications kadmin Describes how to use this command to administer the Kerberos V5 database. kdb5_util Describes how to use this command to create and perform low-level administrative functions on the Kerberos V5 database. Server Applications krb5kdc Describes available command line options for the Kerberos V5 KDC. kadmind Describes available command line options for the Kerberos V5 administration server. 
Configuration Files krb5.conf Describes the format and options available within the configuration file for the Kerberos V5 library. kdc.conf Describes the format and options available within the configuration file for the Kerberos V5 AS and KDC. [3] A system where both the client and the server share a common key that is used to encrypt and decrypt network communication. | [
"identifier @ REALM",
"service/FQDN @ REALM",
"www.example.com CNAME web-01.example.com web-01.example.com A 192.0.2.145",
"ssh www.example.com",
"foo.example.org EXAMPLE.ORG foo.example.com EXAMPLE.COM foo.hq.example.com HQ.EXAMPLE.COM",
"[domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM",
"USDORIGIN example.com _kerberos TXT \"EXAMPLE.COM\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/using_kerberos |
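Because the chapter refers to kinit, klist, and kdestroy without a worked example, a minimal command sequence is sketched below; the principal name is illustrative and reuses the EXAMPLE.COM realm from the text:

# Obtain a ticket-granting ticket; the password is entered locally and never crosses the network
kinit jsmith@EXAMPLE.COM
# Inspect the credential cache, including the TGT expiry time
klist
# Destroy the cached credentials when the session is finished
kdestroy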
15.4. Creating and Managing Users for a TPS | 15.4. Creating and Managing Users for a TPS There are three defined roles for TPS users, which function as groups for the TPS: Agents , who perform actual token management operations, such setting the token status and changing token policies Administrators , who manage users for the TPS subsystem and have limited control over tokens Operators , who have no management control but are able to view and list tokens, certificates, and activities performed through the TPS Additional groups cannot be added for the TPS. All of the TPS subsystem users are authenticated against an LDAP directory database that contains their certificate (because accessing the TPS's web services requires certificate-based authentication), and the authentication process checks the TPS group entries - ou=TUS Agents , ou=TUS Administrators , and ou=TUS Operators - to see to which roles the user belongs, using Apache's mod_tokendb module. Users for the TPS are added and managed through the Web UI or the CLI. The Web UI is accessible at https:// server.example.com :8443/tps/ui/ . To use the Web UI or the CLI, the TPS administrator has to authenticate using a user certificate. 15.4.1. Listing and Searching for Users 15.4.1.1. From the Web UI To list users from the Web UI: Click the Accounts tab. Click the Users menu item. The list of users appears on the page. To search for certain users, write the keyword in the search field and press Enter . To list all users again, remove the keyword and press Enter . 15.4.1.2. From the Command Line To list users from the CLI, run: To view user details from the CLI, run: 15.4.2. Adding Users 15.4.2.1. From the Web UI To add a user from the Web UI: Click the Accounts tab. Click the Users menu item. Click the Add button on the Users page. Fill in the user ID, full name, and TPS profile. Click the Save button. 15.4.2.1.1. From the Command Line To add a user from the CLI, run: 15.4.3. Setting Profiles for Users A TPS profile is much like a CA profile; it defines rules for processing different types of tokens. The profile is assigned automatically to a token based on some characteristic of the token, like the CUID. Users can only see tokens for the profiles which are assigned to them. Note A user can only see entries relating to the profile configured for it, including both token operations and tokens themselves. For an administrator to be able to search and manage all tokens configured in the TPS, the administrator user entry should be set to All profiles . Setting specific profiles for users is a simple way to control access for operators and agents to specific users or token types. Token profiles are sets of policies and configurations that are applied to a token. Token profiles are mapped to tokens automatically based on some kind of attribute in the token itself, such as a CCUID range. Token profiles are created as other certificate profiles in the CA profile directory and are then added to the TPS configuration file, CS.cfg , to map the CA's token profile to the token type. Configuring token mapping is covered in Section 6.7, "Mapping Resolver Configuration" . To manage user profiles from the Web UI: Click the Accounts tab. Click the Users menu item. Click the user name of the user you want to modify. Click the Edit link. In the TPS Profile field, enter the profile names separated by commas, or enter All Profiles . Click the Save button. 15.4.4. Managing User Roles A role is just a group within the TPS. 
Each role can view different tabs of the TPS services pages. The group is editable, so it is possible to add and remove role assignments for a user. A user can belong to more than one role or group. The bootstrap user, for example, belongs to all three groups. 15.4.4.1. From the Web UI To manage group members from the Web UI: Click the Accounts tab. Click the Groups menu item. Click the name of the group that you want to change, for example TPS Agents. To add a user to this group: Click the Add button. Enter the user ID. Click the Add button. To remove a user from this group: Select the check box to the user. Click the Remove button. Click the OK button. 15.4.4.2. From the Command Line To list groups from the CLI, run: To list group members from the CLI, run: To add a user to a group from the CLI, run: To delete a user from a group from the CLI, run: 15.4.5. Managing User Certificates User certificates can be managed from the CLI: To list user certificates, run: To add a certificate to a user: Obtain a user certificate for the new user. Requesting and submitting certificates is explained in Chapter 5, Requesting, Enrolling, and Managing Certificates . Important A TPS administrator must have a signing certificate. The recommended profile to use is Manual User Signing and Encryption Certificates Enrollment. Run the following command: To remove a certificate from a user, run: 15.4.6. Renewing TPS Agent and Administrator Certificates Regenerating the certificate takes its original key and its original profile and request, and recreates an identical key with a new validity period and expiration date. The TPS has a bootstrap user that was created at the time the subsystem was created. A new certificate can be requested for this user when their original one expires, using one of the default renewal profiles. Certificates for administrative users can be renewed directly in the end user enrollment forms, using the serial number of the original certificate. Renew the user certificates through the CA's end users forms, as described in Section 5.4.1.1.2, "Certificate-Based Renewal" . This must be the same CA as first issued the certificate (or a clone of it). Agent certificates can be renewed by using the certificate-based renewal form in the end entities page, Self-renew user SSL client certificate . This form recognizes and updates the certificate stored in the browser's certificate store directly. Note It is also possible to renew the certificate using certutil , as described in Section 17.3.3, "Renewing Certificates Using certutil" . Rather than using the certificate stored in a browser to initiate renewal, certutil uses an input file with the original key. Add the new certificate to the user and remove the old certificate as described in Section 15.4.5, "Managing User Certificates" . 15.4.7. Deleting Users Warning It is possible to delete the last user account, and the operation cannot be undone. Be very careful about the user which is selected to be deleted. To delete users from the Web UI: Click the Accounts tab. Click the Users menu item. Select the check box to the users to be deleted. Click the Remove button. Click the OK button. To delete a user from the CLI, run: | [
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-find",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-show username",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-add username --fullName full_name",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-find",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-find group_name",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-add group_name user_name",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-del group_name user_name",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-find user_name",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-add user_name --serial cert_serial_number",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-del user_name cert_id",
"pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-del user_name"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing-user-and-groups-for_a_TPS |
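As a hedged end-to-end sketch that strings together the commands above, the following sequence creates an agent, assigns the role, and attaches a certificate; the user ID, full name, group name spelling, and certificate serial number are assumptions made for illustration only:

pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-add jsmith --fullName "John Smith"
pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-group-member-add "TPS Agents" jsmith
pki -d client_db_dir -c client_db_password -n admin_cert_nickname tps-user-cert-add jsmith --serial 0x80000004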
Chapter 89. JaegerTracing schema reference | Chapter 89. JaegerTracing schema reference The type JaegerTracing has been deprecated. Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the JaegerTracing type from OpenTelemetryTracing . It must have the value jaeger for the type JaegerTracing . Property Property type Description type string Must be jaeger . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-jaegertracing-reference |
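To make the discriminator concrete, a minimal sketch for one of the listed specs is shown below; it assumes the v1beta2 Kafka API group and a KafkaBridge resource, and only the tracing stanza is relevant here (other required bridge fields are omitted):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge              # hypothetical name
spec:
  # ...required bridge settings omitted...
  tracing:
    type: jaeger               # deprecated; the OpenTelemetryTracing type uses type: opentelemetry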
Chapter 1. Getting started using the RHEL web console | Chapter 1. Getting started using the RHEL web console Learn how to install the Red Hat Enterprise Linux 8 web console, how to add and manage remote hosts through its convenient graphical interface, and how to monitor the systems managed by the web console. 1.1. What is the RHEL web console The RHEL web console is a web-based interface designed for managing and monitoring your local system, as well as Linux servers located in your network environment. The RHEL web console enables you to perform a wide range of administration tasks, including: Managing services Managing user accounts Managing and monitoring system services Configuring network interfaces and firewall Reviewing system logs Managing virtual machines Creating diagnostic reports Setting kernel dump configuration Configuring SELinux Updating software Managing system subscriptions The RHEL web console uses the same system APIs as you would use in a terminal, and actions performed in a terminal are immediately reflected in the RHEL web console. You can monitor the logs of systems in the network environment, as well as their performance, displayed as graphs. In addition, you can change the settings directly in the web console or through the terminal. 1.2. Installing and enabling the web console To access the RHEL web console, first enable the cockpit.socket service. Red Hat Enterprise Linux 8 includes the web console installed by default in many installation variants. If this is not the case on your system, install the cockpit package before enabling the cockpit.socket service. Procedure If the web console is not installed by default on your installation variant, manually install the cockpit package: Enable and start the cockpit.socket service, which runs a web server: If the web console was not installed by default on your installation variant and you are using a custom firewall profile, add the cockpit service to firewalld to open port 9090 in the firewall: Verification To verify the installation and configuration, open the web console . 1.3. Logging in to the web console When the cockpit.socket service is running and the corresponding firewall port is open, you can log in to the web console in your browser for the first time. Prerequisites Use one of the following browsers to open the web console: Mozilla Firefox 52 and later Google Chrome 57 and later Microsoft Edge 16 and later System user account credentials The RHEL web console uses a specific pluggable authentication modules (PAM) stack at /etc/pam.d/cockpit . The default configuration allows logging in with the user name and password of any local account on the system. Port 9090 is open in your firewall. Procedure In your web browser, enter the following address to access the web console: Note This provides a web-console login on your local machine. If you want to log in to the web console of a remote system, see Section 1.5, "Connecting to the web console from a remote machine" If you use a self-signed certificate, the browser displays a warning. Check the certificate, and accept the security exception to proceed with the login. The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last file with a .cert extension in alphabetical order. To avoid having to grant security exceptions, install a certificate signed by a certificate authority (CA). In the login screen, enter your system user name and password. Click Log In . 
After successful authentication, the RHEL web console interface opens. Important To switch between limited and administrative access, click Administrative access or Limited access in the top panel of the web console page. You must provide your user password to gain administrative access. 1.4. Disabling basic authentication in the web console You can modify the behavior of an authentication scheme by modifying the cockpit.conf file. Use the none action to disable an authentication scheme and only allow authentication through GSSAPI and forms. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Open or create the cockpit.conf file in the /etc/cockpit/ directory in a text editor of your preference, for example: Add the following text: Save the file. Restart the web console for changes to take effect. 1.5. Connecting to the web console from a remote machine You can connect to your web console interface from any client operating system and also from mobile phones or tablets. Prerequisites A device with a supported internet browser, such as: Mozilla Firefox 52 and later Google Chrome 57 and later Microsoft Edge 16 and later The RHEL 8 you want to access with an installed and accessible web console. For instructions, see Installing and enabling the web console . Procedure Open your web browser. Type the remote server's address in one of the following formats: With the server's host name: For example: With the server's IP address: For example: After the login interface opens, log in with your RHEL system credentials. 1.6. Logging in to the web console using a one-time password If your system is part of an Identity Management (IdM) domain with enabled one-time password (OTP) configuration, you can use an OTP to log in to the RHEL web console. Important It is possible to log in using a one-time password only if your system is part of an Identity Management (IdM) domain with enabled OTP configuration. For more information about OTP in IdM, see One-time password in Identity Management . Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . An Identity Management server with enabled OTP configuration. For details, see One-time password in Identity Management . A configured hardware or software device generating OTP tokens. Procedure Open the RHEL web console in your browser: Locally: https://localhost:PORT_NUMBER Remotely with the server hostname: https://example.com:PORT_NUMBER Remotely with the server IP address: https://EXAMPLE.SERVER.IP.ADDR:PORT_NUMBER If you use a self-signed certificate, the browser issues a warning. Check the certificate and accept the security exception to proceed with the login. The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last file with a .cert extension in alphabetical order. To avoid having to grant security exceptions, install a certificate signed by a certificate authority (CA). The Login window opens. In the Login window, enter your system user name and password. Generate a one-time password on your device. Enter the one-time password into a new field that appears in the web console interface after you confirm your password. Click Log in . Successful login takes you to the Overview page of the web console interface. 1.7. 
Joining a RHEL 8 system to an IdM domain using the web console You can use the web console to join the Red Hat Enterprise Linux 8 system to the Identity Management (IdM) domain. Prerequisites The IdM domain is running and reachable from the client you want to join. You have the IdM domain administrator credentials. You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Configuration field of the Overview tab click Join Domain . In the Join a Domain dialog box, enter the host name of the IdM server in the Domain Address field. In the Domain administrator name field, enter the user name of the IdM administration account. In the Domain administrator password , add a password. Click Join . Verification If the RHEL 8 web console did not display an error, the system has been joined to the IdM domain and you can see the domain name in the System screen. To verify that the user is a member of the domain, click the Terminal page and type the id command: Additional resources Planning Identity Management Installing Identity Management Managing IdM users, groups, hosts, and access control rules 1.8. Adding a banner to the login page You can set the web console to show a content of a banner file on the login screen. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Open the /etc/issue.cockpit file in a text editor of your preference: Add the content you want to display as the banner to the file, for example: You cannot include any macros in the file, but you can use line breaks and ASCII art. Save the file. Open the cockpit.conf file in the /etc/cockpit/ directory in a text editor of your preference, for example: Add the following text to the file: Save the file. Restart the web console for changes to take effect. Verification Open the web console login screen again to verify that the banner is now visible: 1.9. Configuring automatic idle lock in the web console You can enable the automatic idle lock and set the idle timeout for your system through the web console interface. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . Procedure Open the cockpit.conf file in the /etc/cockpit/ directory in a text editor of your preference, for example: Add the following text to the file: Substitute <X> with a number for a time period of your choice in minutes. Save the file. Restart the web console for changes to take effect. Verification Check if the session logs you out after a set period of time. 1.10. Changing the web console listening port By default, the RHEL web console communicates through TCP port 9090. You can change the port number by overriding the default socket settings. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . You have root privileges or permissions to enter administrative commands with sudo . The firewalld service is running. 
Procedure Pick an unoccupied port, for example, <4488/tcp> , and instruct SELinux to allow the cockpit service to bind to that port: Note that a port can be used only by one service at a time, and thus an attempt to use an already occupied port implies the ValueError: Port already defined error message. Open the new port and close the former one in the firewall: Create an override file for the cockpit.socket service: In the following editor screen, which opens an empty override.conf file located in the /etc/systemd/system/cockpit.socket.d/ directory, change the default port for the web console from 9090 to the previously picked number by adding the following lines: Note that the first ListenStream= directive with an empty value is intentional. You can declare multiple ListenStream directives in a single socket unit and the empty value in the drop-in file resets the list and disables the default port 9090 from the original unit. Important Insert the code snippet between the lines starting with # Anything between here and # Lines below this . Otherwise, the system discards your changes. Save the changes by pressing Ctrl + O and Enter . Exit the editor by pressing Ctrl + X . Reload the changed configuration: Check that your configuration is working: Restart cockpit.socket : Verification Open your web browser, and access the web console on the updated port, for example: Additional resources firewall-cmd(1) , semanage(8) , systemd.unit(5) , and systemd.socket(5) man pages on your system | [
"yum install cockpit",
"systemctl enable --now cockpit.socket",
"firewall-cmd --add-service=cockpit --permanent firewall-cmd --reload",
"https://localhost:9090",
"vi cockpit.conf",
"[basic] action = none",
"systemctl try-restart cockpit",
"https:// <server.hostname.example.com> : <port-number>",
"https://example.com:9090",
"https:// <server.IP_address> : <port-number>",
"https://192.0.2.2:9090",
"id euid=548800004(example_user) gid=548800004(example_user) groups=548800004(example_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"vi /etc/issue.cockpit",
"This is an example banner for the RHEL web console login page.",
"vi /etc/cockpit/cockpit.conf",
"[Session] Banner=/etc/issue.cockpit",
"systemctl try-restart cockpit",
"vi /etc/cockpit/cockpit.conf",
"[Session] IdleTimeout= <X>",
"systemctl try-restart cockpit",
"semanage port -a -t websm_port_t -p tcp <4488>",
"firewall-cmd --service cockpit --permanent --add-port= <4488> /tcp firewall-cmd --service cockpit --permanent --remove-port=9090/tcp",
"systemctl edit cockpit.socket",
"[Socket] ListenStream= ListenStream= <4488>",
"systemctl daemon-reload",
"systemctl show cockpit.socket -p Listen Listen=[::]:4488 (Stream)",
"systemctl restart cockpit.socket",
"https://machine1.example.com:4488"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_systems_using_the_rhel_8_web_console/getting-started-with-the-rhel-8-web-console_system-management-using-the-RHEL-8-web-console |
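The interactive port-change procedure above can also be scripted for unattended provisioning. The sketch below assumes the example port 4488, a running firewalld, and root privileges; it writes the systemd drop-in file directly instead of opening systemctl edit.

#!/bin/sh
# Move the web console from port 9090 to 4488 without interactive steps.
NEWPORT=4488

# Allow cockpit to bind to the new port under SELinux.
semanage port -a -t websm_port_t -p tcp "$NEWPORT"

# Open the new port in the cockpit firewall service and close 9090.
firewall-cmd --permanent --service=cockpit --add-port="${NEWPORT}/tcp"
firewall-cmd --permanent --service=cockpit --remove-port=9090/tcp
firewall-cmd --reload

# Write the drop-in that resets the default listener and adds the new one.
mkdir -p /etc/systemd/system/cockpit.socket.d
cat > /etc/systemd/system/cockpit.socket.d/override.conf <<EOF
[Socket]
ListenStream=
ListenStream=$NEWPORT
EOF

systemctl daemon-reload
systemctl restart cockpit.socket
systemctl show cockpit.socket -p Listen    # expect Listen=[::]:4488 (Stream)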
Chapter 1. Updating Red Hat Enterprise Linux AI | Chapter 1. Updating Red Hat Enterprise Linux AI Red Hat Enterprise Linux AI allows you to update your instance so you can use the latest version of RHEL AI and InstructLab. 1.1. Updating your RHEL AI instance You can update your instance to use the latest version of RHEL AI and InstructLab. Prerequisites You installed and deployed a Red Hat Enterprise Linux AI instance on one of the supported platforms. You created a Red Hat registry account. Procedure Log in to your Red Hat registry account with the podman command: $ sudo podman login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json Or you can log in with the skopeo command: $ sudo skopeo login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json Upgrading to a minor version of Red Hat Enterprise Linux AI You can upgrade your instance to use the latest version of Red Hat Enterprise Linux AI by running the following command: $ sudo bootc switch <latest-rhelai-image> where <latest-rhelai-image> specifies the latest version of the RHEL AI image. Example command $ sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2 Restart your system by running the following command: $ sudo reboot -n Upgrading to a z-stream version of Red Hat Enterprise Linux AI If a z-stream exists, you can upgrade your system to a z-stream version of RHEL AI by running the following command: $ sudo bootc upgrade --apply | [
"sudo podman login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json",
"sudo skopeo login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json",
"sudo bootc switch <latest-rhelai-image>",
"sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2",
"sudo reboot -n",
"sudo bootc upgrade --apply"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/updating/updating_system |
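For convenience, the update steps above can be collected into one script. The sketch below is illustrative only: it assumes the registry credentials are exported as RH_USER and RH_PASSWORD and that the NVIDIA bootc image at tag 1.2 is the intended target; substitute the image and tag for your release.

#!/bin/sh
# Sketch of a RHEL AI minor-version update, using the commands shown above.
set -e

sudo podman login registry.redhat.io \
    --username "$RH_USER" --password "$RH_PASSWORD" \
    --authfile /etc/ostree/auth.json

# Switch to the target RHEL AI bootc image (example tag).
sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2

# Reboot into the new image.
sudo reboot -n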
Chapter 2. OpenShift Container Platform architecture | Chapter 2. OpenShift Container Platform architecture 2.1. Introduction to OpenShift Container Platform OpenShift Container Platform is a platform for developing and running containerized applications. It is designed to allow applications and the data centers that support them to expand from just a few machines and applications to thousands of machines that serve millions of clients. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. Its implementation in open Red Hat technologies lets you extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments. 2.1.1. About Kubernetes Although container images and the containers that run from them are the primary building blocks for modern application development, to run them at scale requires a reliable and flexible distribution system. Kubernetes is the defacto standard for orchestrating containers. Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The general concept of Kubernetes is fairly simple: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. Using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. In only a few years, Kubernetes has seen massive cloud and on-premise adoption. The open source development model allows many people to extend Kubernetes by implementing different technologies for components such as networking, storage, and authentication. 2.1.2. The benefits of containerized applications Using containerized applications offers many advantages over using traditional deployment methods. Where applications were once expected to be installed on operating systems that included all their dependencies, containers let an application carry their dependencies with them. Creating containerized applications offers many benefits. 2.1.2.1. Operating system benefits Containers use small, dedicated Linux operating systems without a kernel. Their file system, networking, cgroups, process tables, and namespaces are separate from the host Linux system, but the containers can integrate with the hosts seamlessly when necessary. Being based on Linux allows containers to use all the advantages that come with the open source development model of rapid innovation. Because each container uses a dedicated operating system, you can deploy applications that require conflicting software dependencies on the same host. Each container carries its own dependent software and manages its own interfaces, such as networking and file systems, so applications never need to compete for those assets. 2.1.2.2. 
Deployment and scaling benefits If you employ rolling upgrades between major releases of your application, you can continuously improve your applications without downtime and still maintain compatibility with the current release. You can also deploy and test a new version of an application alongside the existing version. If the container passes your tests, simply deploy more new containers and remove the old ones. Since all the software dependencies for an application are resolved within the container itself, you can use a standardized operating system on each host in your data center. You do not need to configure a specific operating system for each application host. When your data center needs more capacity, you can deploy another generic host system. Similarly, scaling containerized applications is simple. OpenShift Container Platform offers a simple, standard way of scaling any containerized service. For example, if you build applications as a set of microservices rather than large, monolithic applications, you can scale the individual microservices individually to meet demand. This capability allows you to scale only the required services instead of the entire application, which can allow you to meet application demands while using minimal resources. 2.1.3. OpenShift Container Platform overview OpenShift Container Platform provides enterprise-ready enhancements to Kubernetes, including the following enhancements: Hybrid cloud deployments. You can deploy OpenShift Container Platform clusters to a variety of public cloud platforms or in your data center. Integrated Red Hat technology. Major components in OpenShift Container Platform come from Red Hat Enterprise Linux (RHEL) and related Red Hat technologies. OpenShift Container Platform benefits from the intense testing and certification initiatives for Red Hat's enterprise quality software. Open source development model. Development is completed in the open, and the source code is available from public software repositories. This open collaboration fosters rapid innovation and development. Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform 4.9 offers. The following sections describe some unique features and benefits of OpenShift Container Platform. 2.1.3.1. Custom operating system OpenShift Container Platform uses Red Hat Enterprise Linux CoreOS (RHCOS), a container-oriented operating system that is specifically designed for running containerized applications from OpenShift Container Platform and works with new tools to provide fast installation, Operator-based management, and simplified upgrades. RHCOS includes: Ignition, which OpenShift Container Platform uses as a firstboot system configuration for initially bringing up and configuring machines. CRI-O, a Kubernetes native container runtime implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides facilities for running, stopping, and restarting containers. It fully replaces the Docker Container Engine, which was used in OpenShift Container Platform 3. Kubelet, the primary node agent for Kubernetes that is responsible for launching and monitoring containers. 
In OpenShift Container Platform 4.9, you must use RHCOS for all control plane machines, but you can use Red Hat Enterprise Linux (RHEL) as the operating system for compute machines, which are also known as worker machines. If you choose to use RHEL workers, you must perform more system maintenance than if you use RHCOS for all of the cluster machines. 2.1.3.2. Simplified installation and update process With OpenShift Container Platform 4.9, if you have an account with the right permissions, you can deploy a production cluster in supported clouds by running a single command and providing a few values. You can also customize your cloud installation or install your cluster in your data center if you use a supported platform. For clusters that use RHCOS for all machines, updating, or upgrading, OpenShift Container Platform is a simple, highly-automated process. Because OpenShift Container Platform completely controls the systems and services that run on each machine, including the operating system itself, from a central control plane, upgrades are designed to become automatic events. If your cluster contains RHEL worker machines, the control plane benefits from the streamlined update process, but you must perform more tasks to upgrade the RHEL machines. 2.1.3.3. Other key features Operators are both the fundamental unit of the OpenShift Container Platform 4.9 code base and a convenient way to deploy applications and software components for your applications to use. In OpenShift Container Platform, Operators serve as the platform foundation and remove the need for manual upgrades of operating systems and control plane applications. OpenShift Container Platform Operators such as the Cluster Version Operator and Machine Config Operator allow simplified, cluster-wide management of those critical components. Operator Lifecycle Manager (OLM) and the OperatorHub provide facilities for storing and distributing Operators to people developing and deploying applications. The Red Hat Quay Container Registry is a Quay.io container registry that serves most of the container images and Operators to OpenShift Container Platform clusters. Quay.io is a public registry version of Red Hat Quay that stores millions of images and tags. Other enhancements to Kubernetes in OpenShift Container Platform include improvements in software defined networking (SDN), authentication, log aggregation, monitoring, and routing. OpenShift Container Platform also offers a comprehensive web console and the custom OpenShift CLI ( oc ) interface. 2.1.3.4. OpenShift Container Platform lifecycle The following figure illustrates the basic OpenShift Container Platform lifecycle: Creating an OpenShift Container Platform cluster Managing the cluster Developing and deploying applications Scaling up applications Figure 2.1. High level OpenShift Container Platform overview 2.1.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/architecture/architecture |
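To make the pod, service, and replica concepts from this overview concrete, the following sketch (not part of the original text) creates a Deployment that keeps two pod replicas running and a Service that gives them a stable access point; the names and the UBI httpd image are assumptions chosen only for illustration.

# Create a two-replica Deployment and a Service in the current project.
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2                      # Kubernetes keeps two pod replicas running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: registry.access.redhat.com/ubi9/httpd-24   # assumed example image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello                     # routes traffic to the pods above by label
  ports:
  - port: 8080
    targetPort: 8080
EOF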
20.4. Managing the Password Policy | 20.4. Managing the Password Policy A password policy minimizes the risks of using passwords by enforcing a certain level of security. For example, a password policy can define that: Users must change their passwords according to a schedule. Users must provide non-trivial passwords. The password syntax must meet certain complexity requirements. Warning When using a password administrator account or the Directory Manager (root DN) to set a password, password policies are bypassed and not verified. Do not use these accounts for regular user password management. Use them only to perform password administration tasks that require bypassing the password policies. Directory Server supports fine-grained password policy, so password policies can be applied to the entire directory ( global password policy), a particular subtree ( subtree-level or local password policy), or a particular user ( user-level or local password policy). The complete password policy applied to a user account is comprised of the following elements: The type or level of password policy checks. This information indicates whether the server should check for and enforce a global password policy or local (subtree/user-level) password policies. Password policies work in an inverted pyramid, from general to specific. A global password policy is superseded by a subtree-level password policy, which is superseded by a user-level password policy. Only one password policy is enforced for the entry; password policies are not additive. This means that if a particular attribute is configured in the global or subtree-level policy, but not in the user-level password policy, the attribute is not used for the user when a login is attempted because the active, applied policy is the user-level policy. Password add and modify information. The password information includes password syntax and password history details. Bind information. The bind information includes the number of grace logins permitted, password aging attributes, and tracking bind failures. Note After establishing a password policy, user passwords can be protected from potential threats by configuring an account lockout policy. Account lockout protects against hackers who try to break into the directory by repeatedly guessing a user's password. 20.4.1. Configuring the Global Password Policy By default, global password policy settings are disabled. This section provides some examples how to configure a global password policy. Note After configuring the password policy, configure an account lockout policy. For details, see Section 20.9, "Configuring a Password-Based Account Lockout Policy" . 20.4.1.1. Configuring a Global Password Policy Using the Command Line Use the dsconf utility to display and edit the global password policy settings: Display the current settings: Adjust the password policy settings. For example, to enable the password syntax check and set the minimum length of passwords to 12 characters, enter: For a full list of available settings, enter: Enable the password policy: 20.4.1.2. Configuring a Global Password Policy Using the Web Console To configure a global password policy using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. In the Password Policies menu, select Global Policy . Set the global password policy settings. 
You can set parameters in the following categories: General settings, such as the password storage scheme Password expiration settings, such as the time when a password expires. Account lockout settings, such as after how many failed login attempts an account should be locked. Password syntax settings, such as the minimum password length. To display a tool tip and the corresponding attribute name in the cn=config entry for a parameter, hover the mouse cursor over the setting. For further details, see the parameter's description in the Red Hat Directory Server Configuration, Command, and File Reference . Click Save . 20.4.2. Using Local Password Policies In contrast to a global password policy, which defines settings for the entire directory, a local password policy is a policy for a specific user or subtree. When the fine-grained password policy does not set the password syntax, you can inherit the syntax from the global policy if the nsslapd-pwpolicy-inherit-global parameter is on. If the --pwpinheritglobal option is enabled, and the passwordchecksyntax option is set to OFF in the local policy but to ON in the global policy, the local policy inherits the following attributes from the global policy: passwordchecksyntax passwordminlength passwordmindigits passwordminalphas passwordminuppers passwordminlowers passwordminspecials passwordmin8bit passwordmaxrepeats passwordmincategories passwordmintokenlength 20.4.2.1. Where Directory Server Stores Local Password Policy Entries When you use the dsconf localpwp adduser or dsconf localpwp addsubtree commands, Directory Server automatically creates an entry to store the policy attributes: For a subtree (for example, ou=people,dc=example,dc=com ), the following entries are added: A container entry ( nsPwPolicyContainer ) at the subtree level for holding various password policy-related entries for the subtree and all its children. For example: The actual password policy specification entry ( nsPwPolicyEntry ) for holding all the password policy attributes that are specific to the subtree. For example: The CoS template entry ( nsPwTemplateEntry ) that has the pwdpolicysubentry value pointing to the above ( nsPwPolicyEntry ) entry. For example: The CoS specification entry at the subtree level. For example: For a user (for example, uid= user_name ,ou=people,dc=example,dc=com ), the following entries are added: A container entry ( nsPwPolicyContainer ) at the parent level for holding various password policy-related entries for the user and all its children. For example: The actual password policy specification entry ( nsPwPolicyEntry ) for holding the password policy attributes that are specific to the user. For example: 20.4.2.2. Configuring a Local Password Policy To configure a local password policy: Note Currently, you can only set up a local password policy using the command line. Verify if a local password policy already exists for the subtree or user entry. For example: If no local policy exists, create one: To create a subtree password policy: To create a user password policy: Important When you create a new local policy, the commands automatically set the nsslapd-pwpolicy-local parameter in the cn=config entry to on . If the local password policy should not be enabled, manually set the parameter to off : Set local policy attributes. 
For example, to enable password expiration and set the maximum password age to 14 days ( 1209600 seconds): On a subtree password policy: On a user password policy: For a full list of available settings, enter: | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy get Global Password Policy: cn=config ------------------------------------ passwordstoragescheme: PBKDF2_SHA256 passwordChange: on passwordMustChange: off passwordHistory: off passwordInHistory: 6",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy set --pwdchecksyntax=on --pwdmintokenlen=12",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy set --help",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy set --pwdlockout on",
"dn: cn=nsPwPolicyContainer,ou=people,dc=example,dc=com objectClass: top objectClass: nsContainer cn: nsPwPolicyContainer",
"dn: cn=\"cn=nsPwPolicyEntry,ou=people,dc=example,dc=com\", cn=nsPwPolicyContainer,ou=people,dc=example,dc=com objectclass: top objectclass: extensibleObject objectclass: ldapsubentry objectclass: passwordpolicy",
"dn: cn=\"cn=nsPwTemplateEntry,ou=people,dc=example,dc=com\", cn=nsPwPolicyContainer,ou=people,dc=example,dc=com objectclass: top objectclass: extensibleObject objectclass: costemplate objectclass: ldapsubentry cosPriority: 1 pwdpolicysubentry: cn=\"cn=nsPwPolicyEntry,ou=people,dc=example,dc=com\", cn=nsPwPolicyContainer,ou=people,dc=example,dc=com",
"dn: cn=newpwdpolicy_cos,ou=people,dc=example,dc=com objectclass: top objectclass: LDAPsubentry objectclass: cosSuperDefinition objectclass: cosPointerDefinition cosTemplateDn: cn=cn=nsPwTemplateEntry\\,ou=people\\,dc=example,dc=com, cn=nsPwPolicyContainer,ou=people,dc=example,dc=com cosAttribute: pwdpolicysubentry default operational",
"dn: cn=nsPwPolicyContainer,ou=people,dc=example,dc=com objectClass: top objectClass: nsContainer cn: nsPwPolicyContainer",
"dn: cn=\"cn=nsPwPolicyEntry,uid= user_name ,ou=people,dc=example,dc=com\", cn=nsPwPolicyContainer,ou=people,dc=example,dc=com objectclass: top objectclass: extensibleObject objectclass: ldapsubentry objectclass: passwordpolicy",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp get \"ou=People,dc=example,dc=com\" Error: The policy wasn't set up for the target dn entry or it is invalid",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp addsubtree \"ou=People,dc=example,dc=com\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp adduser \"uid= user_name ,ou=People,dc=example,dc=com\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy set --pwdlocal off",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp set --pwdexpire=on --pwdmaxage=1209600 \"ou=People,dc=example,dc=com\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp set --pwdexpire=on --pwdmaxage=1209600 \"uid= user_name ,ou=People,dc=example,dc=com\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com localpwp set --help"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/user_account_management-managing_the_password_policy |
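The global and local policy commands above can be chained into one administrative script. The sketch below simply reuses the example values from this section (the server URL, the Directory Manager bind DN, and the ou=People,dc=example,dc=com subtree) and is only illustrative.

#!/bin/sh
# Configure a global password policy, then a stricter subtree policy.
URL=ldap://server.example.com
BIND="cn=Directory Manager"

# Global policy: enable syntax checking, 12-character minimum, account lockout.
dsconf -D "$BIND" "$URL" pwpolicy set --pwdchecksyntax=on --pwdmintokenlen=12
dsconf -D "$BIND" "$URL" pwpolicy set --pwdlockout on

# Subtree policy for ou=People: passwords expire after 14 days.
dsconf -D "$BIND" "$URL" localpwp addsubtree "ou=People,dc=example,dc=com"
dsconf -D "$BIND" "$URL" localpwp set --pwdexpire=on --pwdmaxage=1209600 \
    "ou=People,dc=example,dc=com"

# Review the resulting policies.
dsconf -D "$BIND" "$URL" pwpolicy get
dsconf -D "$BIND" "$URL" localpwp get "ou=People,dc=example,dc=com"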
Logging | Logging OpenShift Container Platform 4.18 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"__error__ JSONParserErr __error_details__ Value looks like object, but can't find closing '}' symbol",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: managementState: Managed outputs: - name: <output_name> type: http http: headers: 1 h1: v1 h2: v2 authentication: username: key: username secretName: <http_auth_secret> password: key: password secretName: <http_auth_secret> timeout: 300 proxyURL: <proxy_url> 2 url: <url> 3 tls: insecureSkipVerify: 4 ca: key: <ca_certificate> secretName: <secret_name> 5 pipelines: - inputRefs: - application name: pipe1 outputRefs: - <output_name> 6 serviceAccount: name: <service_account_name> 7",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector spec: managementState: Managed outputs: - name: rsyslog-east 1 syslog: appName: <app_name> 2 enrichment: KubernetesMinimal facility: <facility_value> 3 msgId: <message_ID> 4 payloadKey: <record_field> 5 procId: <process_ID> 6 rfc: <RFC3164_or_RFC5424> 7 severity: informational 8 tuning: deliveryMode: <AtLeastOnce_or_AtMostOnce> 9 url: <url> 10 tls: 11 ca: key: ca-bundle.crt secretName: syslog-secret type: syslog pipelines: - inputRefs: 12 - application name: syslog-east 13 outputRefs: - rsyslog-east serviceAccount: 14 name: logcollector",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: enrichment: KubernetesMinimal: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.example.com:6514 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}",
"2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: 2 otlp: {}",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: drop: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: structuredMetadata: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/logging/index |
8.2.6. Other Interfaces | 8.2.6. Other Interfaces Other common interface configuration files include the following: ifcfg-lo - A local loopback interface is often used in testing, as well as in a variety of applications that require an IP address pointing back to the same system. Any data sent to the loopback device is immediately returned to the host's network layer. Warning Never edit the loopback interface script, /etc/sysconfig/network-scripts/ifcfg-lo , manually. Doing so can prevent the system from operating correctly. ifcfg-irlan0 - An infrared interface allows information to flow between devices, such as a laptop and a printer, over an infrared link. This works in a similar way to an Ethernet device except that it commonly occurs over a peer-to-peer connection. ifcfg-plip0 - A Parallel Line Interface Protocol (PLIP) connection works much the same way as an Ethernet device, except that it utilizes a parallel port. ifcfg-tr0 - Token Ring topologies are not as common on Local Area Networks (LANs) as they once were, having been eclipsed by Ethernet. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-networkscripts-interfaces-other
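For reference, a stock /etc/sysconfig/network-scripts/ifcfg-lo file typically contains entries similar to the following sketch; it is shown only to illustrate the file format and, as the warning above states, the file itself must never be edited:

DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
BROADCAST=127.255.255.255
# Bring the loopback interface up at boot and give it a readable name
ONBOOT=yes
NAME=loopback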
Chapter 11. Secondary networks | Chapter 11. Secondary networks You can configure the Network Observability Operator to collect and enrich network flow data from secondary networks, such as SR-IOV and OVN-Kubernetes. Prerequisites Access to an OpenShift Container Platform cluster with an additional network interface, such as a secondary interface or an L2 network. 11.1. Configuring monitoring for SR-IOV interface traffic In order to collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device, you must set the FlowCollector spec.agent.ebpf.privileged field to true . Then, the eBPF agent monitors other network namespaces in addition to the host network namespaces, which are monitored by default. When a pod with a virtual function (VF) interface is created, a new network namespace is created. With SRIOVNetwork policy IPAM configurations specified, the VF interface is migrated from the host network namespace to the pod network namespace. Prerequisites Access to an OpenShift Container Platform cluster with an SR-IOV device. The SRIOVNetwork custom resource (CR) spec.ipam configuration must be set with an IP address from the range that the interface lists or from other plugins. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for SR-IOV monitoring apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1 1 The spec.agent.ebpf.privileged field value must be set to true to enable SR-IOV monitoring. Additional resources * Creating an additional SR-IOV network attachment with the CNI VRF plugin . 11.2. Configuring virtual machine (VM) secondary network interfaces for Network Observability You can observe network traffic on an OpenShift Virtualization setup by identifying eBPF-enriched network flows coming from VMs that are connected to secondary networks, such as through OVN-Kubernetes. Network flows coming from VMs that are connected to the default internal pod network are automatically captured by Network Observability. Procedure Get information about the virtual machine launcher pod by running the following command. This information is used in Step 5: USD oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.2.39" ], "mac": "0a:58:0a:81:02:27", "default": true, "dns": {} }, { "name": "my-vms/l2-network", 1 "interface": "podc0f69e19ba2", 2 "ips": [ 3 "10.10.10.15" ], "mac": "02:fb:f8:00:00:12", 4 "dns": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: # ... status: # ... 1 The name of the secondary network. 2 The network interface name of the secondary network. 3 The list of IPs used by the secondary network. 4 The MAC address used for the secondary network. In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. 
Configure FlowCollector based on the information you found from the additional network investigation: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4 # ... 1 Ensure that the eBPF agent is in privileged mode so that flows are collected for secondary interfaces. 2 Define the fields to use for indexing the virtual machine launcher pods. It is recommended to use the MAC address as the indexing field to get network flow enrichment for secondary interfaces. If you have overlapping MAC addresses between pods, then additional indexing fields, such as IP and Interface , could be added to have accurate enrichment. 3 If your additional network information has a MAC address, add MAC to the field list. 4 Specify the name of the network found in the k8s.v1.cni.cncf.io/network-status annotation. Usually <namespace>/<network_attachment_definition_name>. Observe VM traffic: Navigate to the Network Traffic page. Filter by Source IP using your virtual machine IP found in the k8s.v1.cni.cncf.io/network-status annotation. View both Source and Destination fields, which should be enriched, and identify the VM launcher pods and the VM instance as owners. | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1",
"oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_observability/network-observability-secondary-networks |
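The SRIOVNetwork prerequisite mentioned in section 11.1 refers to the CR's spec.ipam stanza, which carries the CNI IPAM configuration as a JSON string. The following is only an illustrative sketch; the network name example-sriov, the resource name example_resource, the target namespace my-vms, and the static address are all assumptions chosen for the example:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-sriov
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource
  networkNamespace: my-vms
  # Static IPAM: assigns a fixed address so the VF interface gets an IP in the pod namespace
  ipam: |
    {
      "type": "static",
      "addresses": [
        { "address": "10.10.10.15/24" }
      ]
    }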
5.3.3. Splitting the Volume Group | 5.3.3. Splitting the Volume Group To create the new volume group yourvg , use the vgsplit command to split the volume group myvg . Before you can split the volume group, the logical volume must be inactive. If the file system is mounted, you must unmount the file system before deactivating the logical volume. You can deactivate the logical volumes with the lvchange command or the vgchange command. The following commands deactivate the logical volume mylv and then split the volume group yourvg from the volume group myvg , moving the physical volume /dev/sdc1 into the new volume group yourvg . You can use the vgs command to see the attributes of the two volume groups. | [
"lvchange -a n /dev/myvg/mylv vgsplit myvg yourvg /dev/sdc1 Volume group \"yourvg\" successfully split from \"myvg\"",
"vgs VG #PV #LV #SN Attr VSize VFree myvg 2 1 0 wz--n- 34.30G 10.80G yourvg 1 0 0 wz--n- 17.15G 17.15G"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vol_splitting_ex3 |
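If the file system on mylv were still mounted, the complete sequence would look something like the following sketch; the mount point /mnt/mylv is assumed purely for illustration:

# Unmount the file system so the logical volume can be deactivated
umount /mnt/mylv
# Deactivate the logical volume, then split /dev/sdc1 off into the new volume group
lvchange -a n /dev/myvg/mylv
vgsplit myvg yourvg /dev/sdc1
# Confirm the attributes of both volume groups
vgs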
Chapter 4. Commonly required logs for troubleshooting | Chapter 4. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster: Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues; see Enabling and disabling debug logs for rook-ceph-operator . Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using the cluster-info command: When using the Local Storage Operator, you can generate logs using the cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs: <ocs-operator> To check the operator events: Get the OpenShift Data Foundation operator version and channel. Example output: Example output: Confirm that the installplan is created. Verify the image of the components after updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For example: Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> is the name of the node on which the pod of the component you want to verify the image is running. For example: Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 4.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity. | [
"oc logs <pod-name> -n <namespace>",
"oc logs rook-ceph-operator-<ID> -n openshift-storage",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc cluster-info dump -n openshift-storage --output-directory=<directory-name>",
"oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>",
"oc logs <ocs-operator> -n openshift-storage",
"oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'",
"oc get events --sort-by=metadata.creationTimestamp -n openshift-storage",
"oc get csv -n openshift-storage",
"NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.16.0 NooBaa Operator 4.16.0 Succeeded ocs-operator.v4.16.0 OpenShift Container Storage 4.16.0 Succeeded odf-csi-addons-operator.v4.16.0 CSI Addons 4.16.0 Succeeded odf-operator.v4.16.0 OpenShift Data Foundation 4.16.0 Succeeded",
"oc get subs -n openshift-storage",
"NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.16-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.16 ocs-operator-stable-4.16-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.16 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.16 odf-operator odf-operator redhat-operators stable-4.16",
"oc get installplan -n openshift-storage",
"oc get pods -o wide | grep <component-name>",
"oc get pods -o wide | grep rook-ceph-operator",
"rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>",
"oc debug node/<node name>",
"chroot /host",
"crictl images | grep <component>",
"crictl images | grep rook-ceph"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/commonly-required-logs_rhodf |
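To illustrate the tuning described in section 4.1, the two parameters would appear as container environment variables similar to the following sketch; the exact resource that exposes them depends on your deployment, so treat the surrounding structure as an assumption:

env:
  - name: CSI_LOG_LEVEL
    value: "5"    # 0 = general useful logs, 5 = trace-level verbosity
  - name: CSI_SIDECAR_LOG_LEVEL
    value: "1"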
Chapter 7. Performing and configuring basic builds | Chapter 7. Performing and configuring basic builds The following sections provide instructions for basic build operations, including starting and canceling builds, editing BuildConfigs , deleting BuildConfigs , viewing build details, and accessing build logs. 7.1. Starting a build You can manually start a new build from an existing build configuration in your current project. Procedure To manually start a build, enter the following command: USD oc start-build <buildconfig_name> 7.1.1. Re-running a build You can manually re-run a build using the --from-build flag. Procedure To manually re-run a build, enter the following command: USD oc start-build --from-build=<build_name> 7.1.2. Streaming build logs You can specify the --follow flag to stream the build's logs in stdout . Procedure To manually stream a build's logs in stdout , enter the following command: USD oc start-build <buildconfig_name> --follow 7.1.3. Setting environment variables when starting a build You can specify the --env flag to set any desired environment variable for the build. Procedure To specify a desired environment variable, enter the following command: USD oc start-build <buildconfig_name> --env=<key>=<value> 7.1.4. Starting a build with source Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command: Option Description --from-dir=<directory> Specifies a directory that will be archived and used as a binary input for the build. --from-file=<file> Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. --from-repo=<local_source_repo> Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings. Note Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration. Procedure Start a build from a source using the following command to send the contents of a local Git repository as an archive from the tag v2 : USD oc start-build hello-world --from-repo=../hello-world --commit=v2 7.2. Canceling a build You can cancel a build using the web console, or with the following CLI command. Procedure To manually cancel a build, enter the following command: USD oc cancel-build <build_name> 7.2.1. Canceling multiple builds You can cancel multiple builds with the following CLI command. Procedure To manually cancel multiple builds, enter the following command: USD oc cancel-build <build1_name> <build2_name> <build3_name> 7.2.2. Canceling all builds You can cancel all builds from the build configuration with the following CLI command. Procedure To cancel all builds, enter the following command: USD oc cancel-build bc/<buildconfig_name> 7.2.3. Canceling all builds in a given state You can cancel all builds in a given state, such as new or pending , while ignoring the builds in other states. 
Procedure To cancel all builds in a given state, enter the following command: USD oc cancel-build bc/<buildconfig_name> --state=<state> 7.3. Editing a BuildConfig To edit your build configurations, you use the Edit BuildConfig option in the Builds view of the Developer perspective. You can use either of the following views to edit a BuildConfig : The Form view enables you to edit your BuildConfig using the standard form fields and checkboxes. The YAML view enables you to edit your BuildConfig with full control over the operations. You can switch between the Form view and YAML view without losing any data. The data in the Form view is transferred to the YAML view and vice versa. Procedure In the Builds view of the Developer perspective, click the menu to see the Edit BuildConfig option. Click Edit BuildConfig to see the Form view option. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. The URL is then validated. Optional: Click Show Advanced Git Options to add details such as: Git Reference to specify a branch, tag, or commit that contains code you want to use to build the application. Context Dir to specify the subdirectory that contains code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. In the Build from section, select the option that you would like to build from. You can use the following options: Image Stream tag references an image for a given image stream and tag. Enter the project, image stream, and tag of the location you would like to build from and push to. Image Stream image references an image for a given image stream and image name. Enter the image stream image you would like to build from. Also enter the project, image stream, and tag to push to. Docker image : The Docker image is referenced through a Docker image repository. You will also need to enter the project, image stream, and tag to refer to where you would like to push to. Optional: In the Environment Variables section, add the environment variables associated with the project by using the Name and Value fields. To add more environment variables, use Add Value , or Add from ConfigMap and Secret . Optional: To further customize your application, use the following advanced options: Trigger Triggers a new image build when the builder image changes. Add more triggers by clicking Add Trigger and selecting the Type and Secret . Secrets Adds secrets for your application. Add more secrets by clicking Add secret and selecting the Secret and Mount point . Policy Click Run policy to select the build run policy. The selected policy determines the order in which builds created from the build configuration must run. Hooks Select Run build hooks after image is built to run commands at the end of the build and verify the image. Add Hook type , Command , and Arguments to append to the command. Click Save to save the BuildConfig . 7.4. Deleting a BuildConfig You can delete a BuildConfig using the following command. Procedure To delete a BuildConfig , enter the following command: USD oc delete bc <BuildConfigName> This also deletes all builds that were instantiated from this BuildConfig . To delete a BuildConfig and keep the builds instantiated from the BuildConfig , specify the --cascade=false flag when you enter the following command: USD oc delete --cascade=false bc <BuildConfigName> 7.5. 
Viewing build details You can view build details with the web console or by using the oc describe CLI command. This displays information including: The build source. The build strategy. The output destination. Digest of the image in the destination registry. How the build was created. If the build uses the Docker or Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message. Procedure To view build details, enter the following command: USD oc describe build <build_name> 7.6. Accessing build logs You can access build logs using the web console or the CLI. Procedure To stream the logs of a build directly, enter the following command: USD oc logs -f build/<build_name> 7.6.1. Accessing BuildConfig logs You can access BuildConfig logs using the web console or the CLI. Procedure To stream the logs of the latest build for a BuildConfig , enter the following command: USD oc logs -f bc/<buildconfig_name> 7.6.2. Accessing BuildConfig logs for a given version build You can access logs for a given version build for a BuildConfig using the web console or the CLI. Procedure To stream the logs for a given version build for a BuildConfig , enter the following command: USD oc logs --version=<number> bc/<buildconfig_name> 7.6.3. Enabling log verbosity You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig . Note An administrator can set the default build verbosity for the entire OpenShift Container Platform instance by configuring env/BUILD_LOGLEVEL . This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig . You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build . Available log levels for source builds are as follows: Level 0 Produces output from containers running the assemble script and all encountered errors. This is the default. Level 1 Produces basic information about the executed process. Level 2 Produces very detailed information about the executed process. Level 3 Produces very detailed information about the executed process, and a listing of the archive contents. Level 4 Currently produces the same information as level 3. Level 5 Produces everything mentioned on the previous levels and additionally provides docker push messages. Procedure To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig : sourceStrategy: ... env: - name: "BUILD_LOGLEVEL" value: "2" 1 1 Adjust this value to the desired log level. | [
"oc start-build <buildconfig_name>",
"oc start-build --from-build=<build_name>",
"oc start-build <buildconfig_name> --follow",
"oc start-build <buildconfig_name> --env=<key>=<value>",
"oc start-build hello-world --from-repo=../hello-world --commit=v2",
"oc cancel-build <build_name>",
"oc cancel-build <build1_name> <build2_name> <build3_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc delete bc <BuildConfigName>",
"oc delete --cascade=false bc <BuildConfigName>",
"oc describe build <build_name>",
"oc describe build <build_name>",
"oc logs -f bc/<buildconfig_name>",
"oc logs --version=<number> bc/<buildconfig_name>",
"sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/builds/basic-build-operations |
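Pulling the binary-input options together, a build can be started from a locally built artifact directory while following its logs; the directory name ./target below is only an illustrative assumption:

# Archive the local ./target directory, send it as the build's binary input, and stream the build logs
oc start-build <buildconfig_name> --from-dir=./target --follow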
Chapter 103. Paho MQTT 5 | Chapter 103. Paho MQTT 5 Both producer and consumer are supported. The Paho MQTT5 component provides a connector for the MQTT messaging protocol using the Eclipse Paho library with MQTT v5. Paho is one of the most popular MQTT libraries, so if you would like to integrate it with your Java project, the Camel Paho connector is the way to go. 103.1. Dependencies When using paho-mqtt5 with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-mqtt5-starter</artifactId> </dependency> 103.2. URI format paho-mqtt5:topic[?options] Where topic is the name of the topic. 103.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 103.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 103.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 103.4. Component Options The Paho MQTT 5 component supports 32 options, which are listed below. Name Description Default Type automaticReconnect (common) Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String cleanStart (common) Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. 
This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true boolean clientId (common) MQTT client identifier. The identifier must be unique. String configuration (common) To use the shared Paho configuration. PahoMqtt5Configuration connectionTimeout (common) Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 int filePersistenceDirectory (common) Base directory used by file persistence. Will by default use user directory. String keepAliveInterval (common) Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 int maxReconnectDelay (common) Get the maximum time (in millis) to wait between reconnects. 128000 int persistence (common) Client persistence to be used - memory or file. Enum values: FILE MEMORY MEMORY PahoMqtt5Persistence qos (common) Client quality of service level (0-2). 2 int receiveMaximum (common) Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535. 65535 int retained (common) Retain option. false boolean serverURIs (common) Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. 
Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String sessionExpiryInterval (common) Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0. -1 long willMqttProperties (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message. MqttProperties willPayload (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message. String willQos (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2). 1 int willRetained (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained. false boolean willTopic (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) To use a shared Paho client. MqttClient customWebSocketHeaders (advanced) Sets the Custom WebSocket Headers for the WebSocket Connection. Map executorServiceTimeout (advanced) Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 int httpsHostnameVerificationEnabled (security) Whether SSL HostnameVerifier is enabled or not. The default value is true. true boolean password (security) Password to be used for authentication against the MQTT broker. String socketFactory (security) Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. SocketFactory sslClientProps (security) Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. 
Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. Properties sslHostnameVerifier (security) Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. HostnameVerifier userName (security) Username to be used for authentication against the MQTT broker. String 103.5. Endpoint Options The Paho MQTT 5 endpoint is configured using URI syntax: with the following path and query parameters: 103.5.1. Path Parameters (1 parameters) Name Description Default Type topic (common) Required Name of the topic. String 103.5.2. Query Parameters (32 parameters) Name Description Default Type automaticReconnect (common) Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String cleanStart (common) Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true boolean clientId (common) MQTT client identifier. The identifier must be unique. String connectionTimeout (common) Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 int filePersistenceDirectory (common) Base directory used by file persistence. Will by default use user directory. String keepAliveInterval (common) Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 
60 int maxReconnectDelay (common) Get the maximum time (in millis) to wait between reconnects. 128000 int persistence (common) Client persistence to be used - memory or file. Enum values: FILE MEMORY MEMORY PahoMqtt5Persistence qos (common) Client quality of service level (0-2). 2 int receiveMaximum (common) Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535. 65535 int retained (common) Retain option. false boolean serverURIs (common) Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String sessionExpiryInterval (common) Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0. -1 long willMqttProperties (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message. MqttProperties willPayload (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message. 
String willQos (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2). 1 int willRetained (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained. false boolean willTopic (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean client (advanced) To use an existing mqtt client. MqttClient customWebSocketHeaders (advanced) Sets the Custom WebSocket Headers for the WebSocket Connection. Map executorServiceTimeout (advanced) Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 int httpsHostnameVerificationEnabled (security) Whether SSL HostnameVerifier is enabled or not. The default value is true. true boolean password (security) Password to be used for authentication against the MQTT broker. String socketFactory (security) Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. SocketFactory sslClientProps (security) Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. 
The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. Properties sslHostnameVerifier (security) Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. HostnameVerifier userName (security) Username to be used for authentication against the MQTT broker. String 103.6. Headers The following headers are recognized by the Paho component: Header Java constant Endpoint type Value type Description CamelMqttTopic PahoConstants.MQTT_TOPIC Consumer String The name of the topic CamelMqttQoS PahoConstants.MQTT_QOS Consumer Integer QualityOfService of the incoming message CamelPahoOverrideTopic PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC Producer String Name of topic to override and send to instead of topic specified on endpoint 103.7. Default payload type By default Camel Paho component operates on the binary payloads extracted out of (or put into) the MQTT message: // Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody("paho:topic"); // Send payload byte[] payload = "message".getBytes(); producerTemplate.sendBody("paho:topic", payload); But of course Camel build-in type conversion API can perform the automatic data type transformations for you. 
In the example below Camel automatically converts binary payload into String (and conversely): // Receive payload String payload = consumerTemplate.receiveBody("paho:topic", String.class); // Send payload String payload = "message"; producerTemplate.sendBody("paho:topic", payload); 103.8. Samples For example the following snippet reads messages from the MQTT broker installed on the same host as the Camel router: from("paho:some/queue") .to("mock:test"); While the snippet below sends message to the MQTT broker: from("direct:test") .to("paho:some/target/queue"); For example this is how to read messages from the remote MQTT broker: from("paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883") .to("mock:test"); And here we override the default topic and set to a dynamic topic from("direct:test") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("USD{header.customerId}")) .to("paho:some/target/queue"); 103.9. Spring Boot Auto-Configuration The component supports 33 options, which are listed below. Name Description Default Type camel.component.paho-mqtt5.automatic-reconnect Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true Boolean camel.component.paho-mqtt5.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.paho-mqtt5.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.paho-mqtt5.broker-url The URL of the MQTT broker. tcp://localhost:1883 String camel.component.paho-mqtt5.clean-start Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true Boolean camel.component.paho-mqtt5.client To use a shared Paho client. The option is a org.eclipse.paho.mqttv5.client.MqttClient type. 
MqttClient camel.component.paho-mqtt5.client-id MQTT client identifier. The identifier must be unique. String camel.component.paho-mqtt5.configuration To use the shared Paho configuration. The option is a org.apache.camel.component.paho.mqtt5.PahoMqtt5Configuration type. PahoMqtt5Configuration camel.component.paho-mqtt5.connection-timeout Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 Integer camel.component.paho-mqtt5.custom-web-socket-headers Sets the Custom WebSocket Headers for the WebSocket Connection. Map camel.component.paho-mqtt5.enabled Whether to enable auto configuration of the paho-mqtt5 component. This is enabled by default. Boolean camel.component.paho-mqtt5.executor-service-timeout Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 Integer camel.component.paho-mqtt5.file-persistence-directory Base directory used by file persistence. Will by default use user directory. String camel.component.paho-mqtt5.https-hostname-verification-enabled Whether SSL HostnameVerifier is enabled or not. The default value is true. true Boolean camel.component.paho-mqtt5.keep-alive-interval Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 Integer camel.component.paho-mqtt5.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.paho-mqtt5.max-reconnect-delay Get the maximum time (in millis) to wait between reconnects. 128000 Integer camel.component.paho-mqtt5.password Password to be used for authentication against the MQTT broker. String camel.component.paho-mqtt5.persistence Client persistence to be used - memory or file. PahoMqtt5Persistence camel.component.paho-mqtt5.qos Client quality of service level (0-2). 2 Integer camel.component.paho-mqtt5.receive-maximum Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535. 
65535 Integer camel.component.paho-mqtt5.retained Retain option. false Boolean camel.component.paho-mqtt5.server-u-r-is Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String camel.component.paho-mqtt5.session-expiry-interval Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0. -1 Long camel.component.paho-mqtt5.socket-factory Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. The option is a javax.net.SocketFactory type. SocketFactory camel.component.paho-mqtt5.ssl-client-props Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. 
The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. The option is a java.util.Properties type. Properties camel.component.paho-mqtt5.ssl-hostname-verifier Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. HostnameVerifier camel.component.paho-mqtt5.user-name Username to be used for authentication against the MQTT broker. String camel.component.paho-mqtt5.will-mqtt-properties Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message. The option is a org.eclipse.paho.mqttv5.common.packet.MqttProperties type. MqttProperties camel.component.paho-mqtt5.will-payload Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message. String camel.component.paho-mqtt5.will-qos Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2). 1 Integer camel.component.paho-mqtt5.will-retained Sets the Last Will and Testament (LWT) for the connection. 
In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained. false Boolean camel.component.paho-mqtt5.will-topic Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-mqtt5-starter</artifactId> </dependency>",
"paho-mqtt5:topic[?options]",
"paho-mqtt5:topic",
"// Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody(\"paho:topic\"); // Send payload byte[] payload = \"message\".getBytes(); producerTemplate.sendBody(\"paho:topic\", payload);",
"// Receive payload String payload = consumerTemplate.receiveBody(\"paho:topic\", String.class); // Send payload String payload = \"message\"; producerTemplate.sendBody(\"paho:topic\", payload);",
"from(\"paho:some/queue\") .to(\"mock:test\");",
"from(\"direct:test\") .to(\"paho:some/target/queue\");",
"from(\"paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883\") .to(\"mock:test\");",
"from(\"direct:test\") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple(\"USD{header.customerId}\")) .to(\"paho:some/target/queue\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-paho-mqtt5-component-starter |
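To illustrate how the endpoint options above are typically combined, the following sketch shows a pair of routes in a Spring Boot application that has the camel-paho-mqtt5-starter dependency on its classpath. The broker address and topic names are placeholders chosen for the example, not values taken from this reference.

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class Mqtt5Routes extends RouteBuilder {
    @Override
    public void configure() {
        // Consume from a topic on a remote broker with QoS 1 (placeholder host and topic)
        from("paho-mqtt5:sensors/temperature?brokerUrl=tcp://broker.example.com:1883&qos=1")
            .to("log:mqtt5");

        // Publish incoming message bodies, asking the broker to retain the last value
        from("direct:publish")
            .to("paho-mqtt5:sensors/commands?brokerUrl=tcp://broker.example.com:1883&retained=true");
    }
}

The same brokerUrl, qos, and retained settings can instead be supplied once through the camel.component.paho-mqtt5.* Spring Boot properties listed above, leaving only the topic in the endpoint URI.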
probe::kprocess.exec | probe::kprocess.exec Name probe::kprocess.exec - Attempt to exec to a new program Synopsis Values filename The path to the new executable Context The caller of exec. Description Fires whenever a process attempts to exec to a new program. | [
"kprocess.exec"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kprocess-exec |
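As an illustration of how this probe is typically used, the short SystemTap script below prints the path handed to exec together with the calling process. The execname() and pid() context functions are standard SystemTap tapset functions rather than values documented for this probe, and the script name is arbitrary.

# exec-watch.stp: report every exec attempt on the system (run with: stap exec-watch.stp)
probe kprocess.exec {
  printf("%s (pid %d) is executing %s\n", execname(), pid(), filename)
}
probe timer.s(60) { exit() }   # stop after one minute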
Chapter 19. Configuring the cluster-wide proxy | Chapter 19. Configuring the cluster-wide proxy Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. 19.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. System-wide proxy affects system components only, not user workloads. Add sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). 19.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a ConfigMap that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The ConfigMap name that will be referenced from the Proxy object. 4 The ConfigMap must be in the openshift-config namespace. Create the ConfigMap from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. 
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the ConfigMap in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the ConfigMap must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 19.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: USD oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} status: {} Save the file to apply the changes. Additional resources Replacing the CA Bundle certificate Proxy certificate customization | [
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/enable-cluster-wide-proxy |
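The chapter above covers modifying the Proxy object on an existing cluster; for a new cluster the same settings can be supplied at install time. The abbreviated install-config.yaml sketch below shows only the proxy-related fields, with placeholder values, and assumes the proxy's CA is not already part of the RHCOS trust bundle.

apiVersion: v1
baseDomain: example.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: http://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_PEM_ENCODED_CERT>
  -----END CERTIFICATE-----

After installation, you can confirm the configuration the cluster is actually using with oc get proxy/cluster -o yaml, which shows both the spec and the resolved status fields.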
Chapter 3. Telco core reference design specifications | Chapter 3. Telco core reference design specifications The telco core reference design specifications (RDS) configures an OpenShift Container Platform cluster running on commodity hardware to host telco core workloads. 3.1. Telco core RDS 4.18 use model overview The telco core reference design specifications (RDS) describes a platform that supports large-scale telco applications, including control plane functions such as signaling and aggregation. It also includes some centralized data plane functions, such as user plane functions (UPF). These functions generally require scalability, complex networking support, resilient software-defined storage, and support performance requirements that are less stringent and constrained than far-edge deployments such as RAN. 3.2. About the telco core cluster use model The telco core cluster use model is designed for clusters that run on commodity hardware. Telco core clusters support large scale telco applications including control plane functions such as signaling, aggregation, and session border controller (SBC); and centralized data plane functions such as 5G user plane functions (UPF). Telco core cluster functions require scalability, complex networking support, resilient software-defined storage, and support performance requirements that are less stringent and constrained than far-edge RAN deployments. Figure 3.1. Telco core RDS cluster service-based architecture and networking topology Networking requirements for telco core functions vary widely across a range of networking features and performance points. IPv6 is a requirement and dual-stack is common. Some functions need maximum throughput and transaction rate and require support for user-plane DPDK networking. Other functions use more typical cloud-native patterns and can rely on OVN-Kubernetes, kernel networking, and load balancing. Telco core clusters are configured as standard with three control plane and two or more worker nodes configured with the stock (non-RT) kernel. In support of workloads with varying networking and performance requirements, you can segment worker nodes by using MachineConfigPool custom resources (CR), for example, for non-user data plane or high-throughput use cases. In support of required telco operational features, core clusters have a standard set of Day 2 OLM-managed Operators installed. 3.3. Reference design scope The telco core and telco RAN reference design specifications (RDS) capture the recommended, tested, and supported configurations to get reliable and repeatable performance for clusters running the telco core and telco RAN profiles. Each RDS includes the released features and supported configurations that are engineered and validated for clusters to run the individual profiles. The configurations provide a baseline OpenShift Container Platform installation that meets feature and KPI targets. Each RDS also describes expected variations for each individual configuration. Validation of each RDS includes many long duration and at-scale tests. Note The validated reference configurations are updated for each major Y-stream release of OpenShift Container Platform. Z-stream patch releases are periodically re-tested against the reference configurations. 3.4. Deviations from the reference design Deviating from the validated telco core and telco RAN DU reference design specifications (RDS) can have significant impact beyond the specific component or feature that you change. 
Deviations require analysis and engineering in the context of the complete solution. Important All deviations from the RDS should be analyzed and documented with clear action tracking information. Due diligence is expected from partners to understand how to bring deviations into line with the reference design. This might require partners to provide additional resources to engage with Red Hat to work towards enabling their use case to achieve a best in class outcome with the platform. This is critical for the supportability of the solution and ensuring alignment across Red Hat and with partners. Deviation from the RDS can have some or all of the following consequences: It can take longer to resolve issues. There is a risk of missing project service-level agreements (SLAs), project deadlines, end provider performance requirements, and so on. Unapproved deviations may require escalation at executive levels. Note Red Hat prioritizes the servicing of requests for deviations based on partner engagement priorities. 3.5. Telco core common baseline model The following configurations and use models are applicable to all telco core use cases. The telco core use cases build on this common baseline of features. Cluster topology Telco core clusters conform to the following requirements: High availability control plane (three or more control plane nodes) Non-schedulable control plane nodes Multiple machine config pools Storage Telco core use cases require persistent storage as provided by Red Hat OpenShift Data Foundation. Networking Telco core cluster networking conforms to the following requirements: Dual stack IPv4/IPv6 (IPv4 primary). Fully disconnected - clusters do not have access to public networking at any point in their lifecycle. Supports multiple networks. Segmented networking provides isolation between operations, administration and maintenance (OAM), signaling, and storage traffic. Cluster network type is OVN-Kubernetes as required for IPv6 support. Telco core clusters have multiple layers of networking supported by underlying RHCOS, SR-IOV Network Operator, Load Balancer and other components. These layers include the following: Cluster networking layer. The cluster network configuration is defined and applied through the installation configuration. Update the configuration during Day 2 operations with the NMState Operator. Use the initial configuration to establish the following: Host interface configuration. Active/active bonding (LACP). Secondary/additional network layer. Configure the OpenShift Container Platform CNI through network additionalNetwork or NetworkAttachmentDefinition CRs. Use the initial configuration to configure MACVLAN virtual network interfaces. Application workload layer. User plane networking runs in cloud-native network functions (CNFs). Service Mesh Telco CNFs can use Service Mesh. All telco core clusters require a Service Mesh implementation. The choice of implementation and configuration is outside the scope of this specification. 3.6. Telco core cluster common use model engineering considerations Cluster workloads are detailed in "Application workloads". Worker nodes should run on either of the following CPUs: Intel 3rd Generation Xeon (IceLake) CPUs or better when supported by OpenShift Container Platform, or CPUs with the silicon security bug (Spectre and similar) mitigations turned off. Skylake and older CPUs can experience 40% transaction performance drops when Spectre and similar mitigations are enabled. 
AMD EPYC Zen 4 CPUs (Genoa, Bergamo, or newer) or better when supported by OpenShift Container Platform. Note Currently, per-pod power management is not available for AMD CPUs. IRQ balancing is enabled on worker nodes. The PerformanceProfile CR sets globallyDisableIrqLoadBalancing to false. Guaranteed QoS pods are annotated to ensure isolation as described in "CPU partitioning and performance tuning". All cluster nodes should have the following features: Have Hyper-Threading enabled Have x86_64 CPU architecture Have the stock (non-realtime) kernel enabled Are not configured for workload partitioning The balance between power management and maximum performance varies between machine config pools in the cluster. The following configurations should be consistent for all nodes in a machine config pools group. Cluster scaling. See "Scalability" for more information. Clusters should be able to scale to at least 120 nodes. CPU partitioning is configured using a PerformanceProfile CR and is applied to nodes on a per MachineConfigPool basis. See "CPU partitioning and performance tuning" for additional considerations. CPU requirements for OpenShift Container Platform depend on the configured feature set and application workload characteristics. For a cluster configured according to the reference configuration running a simulated workload of 3000 pods as created by the kube-burner node-density test, the following CPU requirements are validated: The minimum number of reserved CPUs for control plane and worker nodes is 2 CPUs (4 hyper-threads) per NUMA node. The NICs used for non-DPDK network traffic should be configured to use at least 16 RX/TX queues. Nodes with large numbers of pods or other resources might require additional reserved CPUs. The remaining CPUs are available for user workloads. Note Variations in OpenShift Container Platform configuration, workload size, and workload characteristics require additional analysis to determine the effect on the number of required CPUs for the OpenShift platform. 3.6.1. Application workloads Application workloads running on telco core clusters can include a mix of high performance cloud-native network functions (CNFs) and traditional best-effort or burstable pod workloads. Guaranteed QoS scheduling is available to pods that require exclusive or dedicated use of CPUs due to performance or security requirements. Typically, pods that run high performance or latency sensitive CNFs by using user plane networking (for example, DPDK) require exclusive use of dedicated whole CPUs achieved through node tuning and guaranteed QoS scheduling. When creating pod configurations that require exclusive CPUs, be aware of the potential implications of hyper-threaded systems. Pods should request multiples of 2 CPUs when the entire core (2 hyper-threads) must be allocated to the pod. Pods running network functions that do not require high throughput or low latency networking should be scheduled with best-effort or burstable QoS pods and do not require dedicated or isolated CPU cores. Engineering considerations Use the following information to plan telco core workloads and cluster resources: CNF applications should conform to the latest version of Red Hat Best Practices for Kubernetes . Use a mix of best-effort and burstable QoS pods as required by your applications. Use guaranteed QoS pods with proper configuration of reserved or isolated CPUs in the PerformanceProfile CR that configures the node. Guaranteed QoS Pods must include annotations for fully isolating CPUs. 
Best effort and burstable pods are not guaranteed exclusive CPU use. Workloads can be preempted by other workloads, operating system daemons, or kernel tasks. Use exec probes sparingly and only when no other suitable option is available. Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, httpGet or tcpSocket . When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and the frequency must not be set to less than 10 seconds. You can use startup probes, because they do not use significant resources at steady-state operation. This limitation on exec probes applies primarily to liveness and readiness probes. Exec probes cause much higher CPU usage on management cores compared to other probe types because they require process forking. 3.6.2. Signaling workloads Signaling workloads typically use SCTP, REST, gRPC or similar TCP or UDP protocols. Signaling workloads support hundreds of thousands of transactions per second (TPS) by using a secondary multus CNI configured as MACVLAN or SR-IOV interface. These workloads can run in pods with either guaranteed or burstable QoS. 3.7. Telco core RDS components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco core workloads. 3.7.1. CPU partitioning and performance tuning New in this release No reference design updates in this release Description CPU partitioning improves performance and reduces latency by separating sensitive workloads from general-purpose tasks, interrupts, and driver work queues. The CPUs allocated to those auxiliary processes are referred to as reserved in the following sections. In a system with Hyper-Threading enabled, a CPU is one hyper-thread. Limits and requirements The operating system needs a certain amount of CPU to perform all the support tasks, including kernel networking. A system with just user plane networking applications (DPDK) needs at least one core (2 hyper-threads when enabled) reserved for the operating system and the infrastructure components. In a system with Hyper-Threading enabled, core sibling threads must always be in the same pool of CPUs. The set of reserved and isolated cores must include all CPU cores. Core 0 of each NUMA node must be included in the reserved CPU set. Low latency workloads require special configuration to avoid being affected by interrupts, kernel scheduler, or other parts of the platform. For more information, see "Creating a performance profile". Engineering considerations The minimum reserved capacity ( systemReserved ) required can be found by following the guidance in the Which amount of CPU and memory are recommended to reserve for the system in OpenShift 4 nodes? Knowledgebase article. The actual required reserved CPU capacity depends on the cluster configuration and workload attributes. The reserved CPU value must be rounded up to a full core (2 hyper-threads) alignment. Changes to CPU partitioning cause the nodes contained in the relevant machine config pool to be drained and rebooted. The reserved CPUs reduce the pod density, because the reserved CPUs are removed from the allocatable capacity of the OpenShift Container Platform node. The real-time workload hint should be enabled for real-time capable workloads. 
Applying the real time workloadHint setting results in the nohz_full kernel command line parameter being applied to improve performance of high performance applications. When you apply the workloadHint setting, any isolated or burstable pods that do not have the cpu-quota.crio.io: "disable" annotation and a proper runtimeClassName value, are subject to CRI-O rate limiting. When you set the workloadHint parameter, be aware of the tradeoff between increased performance and the potential impact of CRI-O rate limiting. Ensure that required pods are correctly annotated. Hardware without IRQ affinity support affects isolated CPUs. All server hardware must support IRQ affinity to ensure that pods with guaranteed CPU QoS can fully use allocated CPUs. OVS dynamically manages its cpuset entry to adapt to network traffic needs. You do not need to reserve an additional CPU for handling high network throughput on the primary CNI. If workloads running on the cluster use kernel level networking, the RX/TX queue count for the participating NICs should be set to 16 or 32 queues if the hardware permits it. Be aware of the default queue count. With no configuration, the default queue count is one RX/TX queue per online CPU; which can result in too many interrupts being allocated. Note Some drivers do not deallocate the interrupts even after reducing the queue count. If workloads running on the cluster require cgroup v1, you can configure nodes to use cgroup v1 as part of the initial cluster deployment. See "Enabling Linux control group version 1 (cgroup v1)" and Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads . Note Support for cgroup v1 is planned for removal in OpenShift Container Platform 4.19. Clusters running cgroup v1 must transition to cgroup v2. Additional resources Creating a performance profile Configuring host firmware for low latency and high performance Enabling Linux cgroup v1 during installation 3.7.2. Service mesh Description Telco core cloud-native functions (CNFs) typically require a service mesh implementation. Specific service mesh features and performance requirements are dependent on the application. The selection of service mesh implementation and configuration is outside the scope of this documentation. You must account for the impact of service mesh on cluster resource usage and performance, including additional latency introduced in pod networking, in your implementation. Additional resources About OpenShift Service Mesh 3.7.3. Networking The following diagram describes the telco core reference design networking configuration. Figure 3.2. Telco core reference design networking configuration New in this release Support for disabling vendor plugins in the SR-IOV Operator New knowledge base article on creating custom node firewall rules Extended telco core RDS validation with MetalLB and EgressIP telco QE validation FRR-K8s is now available under the Cluster Network Operator. Note If you have custom FRRConfiguration CRs in the metallb-system namespace, you must move them under the openshift-network-operator namespace. Description The cluster is configured for dual-stack IP (IPv4 and IPv6). The validated physical network configuration consists of two dual-port NICs. One NIC is shared among the primary CNI (OVN-Kubernetes) and IPVLAN and MACVLAN traffic, while the second one is dedicated to SR-IOV VF-based pod traffic. A Linux bonding interface ( bond0 ) is created in active-active IEEE 802.3ad LACP mode with the two NIC ports attached. 
The top-of-rack networking equipment must support and be configured for multi-chassis link aggregation (mLAG) technology. VLAN interfaces are created on top of bond0 , including for the primary CNI. Bond and VLAN interfaces are created at cluster install time during the network configuration stage of the installation. Except for the vlan0 VLAN used by the primary CNI, all other VLANs can be created during Day 2 activities with the Kubernetes NMstate Operator. MACVLAN and IPVLAN interfaces are created with their corresponding CNIs. They do not share the same base interface. For more information, see "Cluster Network Operator". SR-IOV VFs are managed by the SR-IOV Network Operator. To ensure consistent source IP addresses for pods behind a LoadBalancer Service, configure an EgressIP CR and specify the podSelector parameter. You can implement service traffic separation by doing the following: Configure VLAN interfaces and specific kernel IP routes on the nodes using NodeNetworkConfigurationPolicy CRs. Create a MetalLB BGPPeer CR for each VLAN to establish peering with the remote BGP router. Define a MetalLB BGPAdvertisement CR to specify which IP address pools should be advertised to a selected list of BGPPeer resources. The following diagram illustrates how specific service IP addresses are advertised to the outside via specific VLAN interfaces. Services routes are defined in BGPAdvertisement CRs and configured with values for IPAddressPool1 and BGPPeer1 fields. Figure 3.3. Telco core reference design MetalLB service separation Additional resources Understanding networking 3.7.3.1. Cluster Network Operator New in this release No reference design updates in this release Description The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during cluster installation. The CNO allows for configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. In support of network traffic separation, multiple network interfaces are configured through the CNO. Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-K is configured with the routingViaHost option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server. Limits and requirements OVN-Kubernetes is required for IPv6 support. Large MTU cluster support requires connected network equipment to be set to the same or larger value. MTU size up to 8900 is supported. MACVLAN and IPVLAN cannot co-locate on the same main interface due to their reliance on the same underlying kernel mechanism, specifically the rx_handler . This handler allows a third-party module to process incoming packets before the host processes them, and only one such handler can be registered per network interface. Since both MACVLAN and IPVLAN need to register their own rx_handler to function, they conflict and cannot coexist on the same interface. 
Review the source code for more details: linux/v6.10.2/source/drivers/net/ipvlan/ipvlan_main.c#L82 linux/v6.10.2/source/drivers/net/macvlan.c#L1260 Alternative NIC configurations include splitting the shared NIC into multiple NICs or using a single dual-port NIC, though they have not been tested and validated. Clusters with single-stack IP configuration are not validated. The reachabilityTotalTimeoutSeconds parameter in the Network CR configures the EgressIP node reachability check total timeout in seconds. The recommended value is 1 second. Engineering considerations Pod egress traffic is handled by kernel routing table using the routingViaHost option. Appropriate static routes must be configured in the host. Additional resources Cluster Network Operator 3.7.3.2. Load balancer New in this release FRR-K8s is now available under the Cluster Network Operator. Important If you have custom FRRConfiguration CRs in the metallb-system namespace, you must move them under the openshift-network-operator namespace. Description MetalLB is a load-balancer implementation for bare metal Kubernetes clusters that uses standard routing protocols. It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. The MetalLB Operator deploys and manages the lifecycle of a MetalLB instance in a cluster. Some use cases might require features not available in MetalLB, such as stateful load balancing. Where necessary, you can use an external third party load balancer. Selection and configuration of an external load balancer is outside the scope of this specification. When an external third-party load balancer is used, the integration effort must include enough analysis to ensure all performance and resource utilization requirements are met. Limits and requirements Stateful load balancing is not supported by MetalLB. An alternate load balancer implementation must be used if this is a requirement for workload CNFs. You must ensure that the external IP address is routable from clients to the host network for the cluster. Engineering considerations MetalLB is used in BGP mode only for telco core use models. For telco core use models, MetalLB is supported only with the OVN-Kubernetes network provider used in local gateway mode. See routingViaHost in "Cluster Network Operator". BGP configuration in MetalLB is expected to vary depending on the requirements of the network and peers. You can configure address pools with variations in addresses, aggregation length, auto assignment, and so on. MetalLB uses BGP for announcing routes only. Only the transmitInterval and minimumTtl parameters are relevant in this mode. Other parameters in the BFD profile should remain close to the defaults as shorter values can lead to false negatives and affect performance. Additional resources When to use MetalLB 3.7.3.3. SR-IOV New in this release You can now create virtual functions for Mellanox NICs with the SR-IOV Network Operator when secure boot is enabled in the cluster host. Before you can create the virtual functions, you must first skip the firmware configuration for the Mellanox NIC and manually allocate the number of virtual functions in the firmware before switching the system to secure boot. Description SR-IOV enables physical functions (PFs) to be divided into multiple virtual functions (VFs). VFs can then be assigned to multiple pods to achieve higher throughput performance while keeping the pods isolated. 
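The following sketch shows the general shape of this PF-to-VF partitioning as an SriovNetworkNodePolicy CR managed by the SR-IOV Network Operator. The resource name, PF name, and VF count are hypothetical; use the reference policy CRs and your NIC documentation for real values.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: example-dpdk-policy           # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: exampleDpdkResource   # exposed to pods as a requestable resource
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  numVfs: 8                           # example number of VFs created on the PF
  nicSelector:
    pfNames:
      - ens3f0                        # placeholder PF name
  deviceType: vfio-pci                # use netdevice instead for kernel networking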
The SR-IOV Network Operator provisions and manages SR-IOV CNI, network device plugin, and other components of the SR-IOV stack. Limits and requirements Only certain network interfaces are supported. See "Supported devices" for more information. Enabling SR-IOV and IOMMU: the SR-IOV Network Operator automatically enables IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from the PF. If a link down detection is required, it must be done at the protocol level. MultiNetworkPolicy CRs can be applied to netdevice networks only. This is because the implementation uses iptables, which cannot manage vfio interfaces. Engineering considerations SR-IOV interfaces in vfio mode are typically used to enable additional secondary networks for applications that require high throughput or low latency. The SriovOperatorConfig CR must be explicitly created. This CR is included in the reference configuration policies, which causes it to be created during initial deployment. NICs that do not support firmware updates with UEFI secure boot or kernel lockdown must be preconfigured with sufficient virtual functions (VFs) enabled to support the number of VFs required by the application workload. For Mellanox NICs, you must disable the Mellanox vendor plugin in the SR-IOV Network Operator. See "Configuring an SR-IOV network device" for more information. To change the MTU value of a VF after the pod has started, do not configure the SriovNetworkNodePolicy MTU field. Instead, use the Kubernetes NMState Operator to set the MTU of the related PF. Additional resources About Single Root I/O Virtualization (SR-IOV) hardware networks Supported devices Configuring the SR-IOV Network Operator on Mellanox cards when Secure Boot is enabled 3.7.3.4. NMState Operator New in this release No reference design updates in this release Description The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across cluster nodes. It enables network interface configurations, static IPs and DNS, VLANs, trunks, bonding, static routes, MTU, and enabling promiscuous mode on the secondary interfaces. The cluster nodes periodically report on the state of each node's network interfaces to the API server. Limits and requirements Not applicable Engineering considerations Initial networking configuration is applied using NMStateConfig content in the installation CRs. The NMState Operator is used only when required for network updates. When SR-IOV virtual functions are used for host networking, the NMState Operator (via nodeNetworkConfigurationPolicy CRs) is used to configure VF interfaces, such as VLANs and MTU. Additional resources Kubernetes NMState Operator 3.7.4. Logging New in this release No reference design updates in this release Description The Cluster Logging Operator enables collection and shipping of logs off the node for remote archival and analysis. The reference configuration uses Kafka to ship audit and infrastructure logs to a remote archive. Limits and requirements Not applicable Engineering considerations The impact of cluster CPU use is based on the number or size of logs generated and the amount of log filtering configured. The reference configuration does not include shipping of application logs. The inclusion of application logs in the configuration requires you to evaluate the application logging rate and have sufficient additional CPU resources allocated to the reserved set. Additional resources Logging 6.0 3.7.5. 
Power Management New in this release No reference design updates in this release Description Use the Performance Profile to configure clusters with high power mode, low power mode, or mixed mode. The choice of power mode depends on the characteristics of the workloads running on the cluster, particularly how sensitive they are to latency. Configure the maximum latency for a low-latency pod by using the per-pod power management C-states feature. Limits and requirements Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states. Configuration varies between hardware vendors. Engineering considerations Latency: To ensure that latency-sensitive workloads meet requirements, you require a high-power or a per-pod power management configuration. Per-pod power management is only available for Guaranteed QoS pods with dedicated pinned CPUs. Additional resources performance.openshift.io/v2 API reference Configuring power saving for nodes Configuring power saving for nodes that run colocated high and low priority workloads 3.7.6. Storage New in this release No reference design updates in this release Description Cloud native storage services can be provided by Red Hat OpenShift Data Foundation or other third-party solutions. OpenShift Data Foundation is a Ceph-based software-defined storage solution for containers. It provides block storage, file system storage, and on-premise object storage, which can be dynamically provisioned for both persistent and non-persistent data requirements. Telco core applications require persistent storage. Note All storage data might not be encrypted in flight. To reduce risk, isolate the storage network from other cluster networks. The storage network must not be reachable, or routable, from other cluster networks. Only nodes directly attached to the storage network should be allowed to gain access to it. Additional resources Red Hat OpenShift Data Foundation 3.7.6.1. Red Hat OpenShift Data Foundation New in this release No reference design updates in this release Description Red Hat OpenShift Data Foundation is a software-defined storage service for containers. For telco core clusters, storage support is provided by OpenShift Data Foundation storage services running externally to the application workload cluster. OpenShift Data Foundation supports separation of storage traffic using secondary CNI networks. Limits and requirements In an IPv4/IPv6 dual-stack networking environment, OpenShift Data Foundation uses IPv4 addressing. For more information, see Network requirements . Engineering considerations OpenShift Data Foundation network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation. 3.7.6.2. Additional storage solutions You can use other storage solutions to provide persistent storage for telco core clusters. The configuration and integration of these solutions is outside the scope of the reference design specifications (RDS). Integration of the storage solution into the telco core cluster must include proper sizing and performance analysis to ensure the storage meets overall performance and resource usage requirements. 3.7.7. Telco core deployment components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with Red Hat Advanced Cluster Management (RHACM). 3.7.7.1. 
Red Hat Advanced Cluster Management New in this release No reference design updates in this release Description Red Hat Advanced Cluster Management (RHACM) provides Multi Cluster Engine (MCE) installation and ongoing GitOps ZTP lifecycle management for deployed clusters. You manage cluster configuration and upgrades declaratively by applying Policy custom resources (CRs) to clusters during maintenance windows. You apply policies with the RHACM policy controller as managed by Topology Aware Lifecycle Manager. Configuration, upgrades, and cluster status are managed through the policy controller. When installing managed clusters, RHACM applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools. You define these configurations with SiteConfig or ClusterInstance CRs. Limits and requirements Hub cluster sizing is discussed in Sizing your cluster . RHACM scaling limits are described in Performance and Scalability . Engineering considerations When managing multiple clusters with unique content per installation, site, or deployment, using RHACM hub templating is strongly recommended. RHACM hub templating allows you to apply a consistent set of policies to clusters while providing for unique values per installation. Additional resources Using GitOps ZTP to provision clusters at the network far edge Red Hat Advanced Cluster Management for Kubernetes 3.7.7.2. Topology Aware Lifecycle Manager New in this release No reference design updates in this release. Description Topology Aware Lifecycle Manager is an Operator which runs only on the hub cluster. TALM manages how changes including cluster and Operator upgrades, configurations, and so on, are rolled out to managed clusters in the network. TALM has the following core features: Provides sequenced updates of cluster configurations and upgrades (OpenShift Container Platform and Operators) as defined by cluster policies. Provides for deferred application of cluster updates. Supports progressive rollout of policy updates to sets of clusters in user configurable batches. Allows for per-cluster actions by adding ztp-done or similar user-defined labels to clusters. Limits and requirements Supports concurrent cluster deployments in batches of 400. Engineering considerations Only policies with the ran.openshift.io/ztp-deploy-wave annotation are applied by TALM during initial cluster installation. Any policy can be remediated by TALM under control of a user created ClusterGroupUpgrade CR. Additional resources Updating managed clusters with the Topology Aware Lifecycle Manager 3.7.7.3. GitOps Operator and GitOps ZTP plugins New in this release No reference design updates in this release Description The GitOps Operator provides a GitOps driven infrastructure for managing cluster deployment and configuration. Cluster definitions and configuration are maintained in a Git repository. ZTP plugins provide support for generating Installation CRs from SiteConfig CRs and automatically wrapping configuration CRs in policies based on RHACM PolicyGenerator CRs. The SiteConfig Operator provides improved support for generation of Installation CRs from ClusterInstance CRs. Important Where possible, use ClusterInstance CRs for cluster installation instead of the SiteConfig with GitOps ZTP plugin method. 
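To give a sense of the declarative input, the following fragment sketches the general shape of a ClusterInstance CR. It is indicative only: the schema and template references are owned by the SiteConfig Operator and must be verified against the documentation for your RHACM release, and every value shown is a placeholder.
apiVersion: siteconfig.open-cluster-management.io/v1alpha1    # verify the API version for your release
kind: ClusterInstance
metadata:
  name: example-core-cluster
  namespace: example-core-cluster
spec:
  clusterName: example-core-cluster
  baseDomain: example.com
  clusterImageSetNameRef: img-example-4.18                    # placeholder cluster image set
  pullSecretRef:
    name: pull-secret
  templateRefs:
    - name: ai-cluster-templates-v1                           # installation templates provided with the operator
      namespace: open-cluster-management
  nodes:
    - hostName: master-0.example.com
      role: master
      templateRefs:
        - name: ai-node-templates-v1
          namespace: open-cluster-management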
You should structure the Git repository according to release version, with all necessary artifacts ( SiteConfig , ClusterInstance , PolicyGenerator , and PolicyGenTemplate , and supporting reference CRs) included. This enables deploying and managing multiple versions of the OpenShift platform and configuration versions to clusters simultaneously and through upgrades. The recommended Git structure keeps reference CRs in a directory separate from customer or partner provided content. This means that you can import reference updates by simply overwriting existing content. Customer or partner-supplied CRs can be provided in a parallel directory to the reference CRs for easy inclusion in the generated configuration policies. Limits and requirements Each ArgoCD application supports up to 300 nodes. Multiple ArgoCD applications can be used to achieve the maximum number of clusters supported by a single hub cluster. The SiteConfig CR must use the extraManifests.searchPaths field to reference the reference manifests. Note Since OpenShift Container Platform 4.15, the spec.extraManifestPath field is deprecated. Engineering considerations Set the MachineConfigPool ( mcp ) CR paused field to true during a cluster upgrade maintenance window and set the maxUnavailable field to the maximum tolerable value. This prevents multiple cluster node reboots during upgrade, which results in a shorter overall upgrade. When you unpause the mcp CR, all the configuration changes are applied with a single reboot. Note During installation, custom mcp CRs can be paused along with setting maxUnavailable to 100% to improve installation times. To avoid confusion or unintentional overwriting when updating content, you should use unique and distinguishable names for custom CRs in the reference-crs/ directory under core-overlay and extra manifests in Git. The SiteConfig CR allows multiple extra-manifest paths. When file names overlap in multiple directory paths, the last file found in the directory order list takes precedence. Additional resources Preparing the GitOps ZTP site configuration repository for version independence Adding custom content to the GitOps ZTP pipeline 3.7.7.4. Monitoring New in this release No reference design updates in this release Description The Cluster Monitoring Operator (CMO) is included by default in OpenShift Container Platform and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects. You can customize the default log retention period, custom alert rules, and so on. The default handling of pod CPU and memory metrics, based on upstream Kubernetes and cAdvisor, makes a tradeoff favoring stale data over metric accuracy. This leads to spikes in reporting, which can create false alerts, depending on the user-specified thresholds. OpenShift Container Platform supports an opt-in Dedicated Service Monitor feature that creates an additional set of pod CPU and memory metrics that do not suffer from this behavior. For more information, see Dedicated Service Monitors - Questions and Answers (Red Hat Knowledgebase) . In addition to the default configuration, the following metrics are expected to be configured for telco core clusters: Pod CPU and memory metrics and alerts for user workloads Limits and requirements You must enable the Dedicated Service Monitor feature to represent pod metrics accurately. Engineering considerations The Prometheus retention period is specified by the user. 
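For example, the retention period is set through the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace; the retention value below is illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d
The 15d value is only an example; choose it based on the tradeoff described next.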
The value used is a tradeoff between operational requirements for maintaining historical data on the cluster against CPU and storage resources. Longer retention periods increase the need for storage and require additional CPU to manage data indexing. Additional resources About OpenShift Container Platform monitoring 3.7.8. Scheduling New in this release No reference design updates in this release Description The scheduler is a cluster-wide component responsible for selecting the correct node for a given workload. It is a core part of the platform and does not require any specific configuration in the common deployment scenarios. However, a few specific use cases are described in the following section. NUMA-aware scheduling can be enabled through the NUMA Resources Operator. For more information, see "Scheduling NUMA-aware workloads". Limits and requirements The default scheduler does not understand the NUMA locality of workloads. It only knows about the sum of all free resources on a worker node. This might cause workloads to be rejected when scheduled to a node with the topology manager policy set to single-numa-node or restricted . For more information, see "Topology Manager policies". For example, consider a pod requesting 6 CPUs that is scheduled to an empty node that has 4 CPUs per NUMA node. The total allocatable capacity of the node is 8 CPUs. The scheduler places the pod on the empty node. The node local admission fails, as there are only 4 CPUs available in each of the NUMA nodes. All clusters with multi-NUMA nodes are required to use the NUMA Resources Operator. See "Installing the NUMA Resources Operator" for more information. Use the machineConfigPoolSelector field in the KubeletConfig CR to select all nodes where NUMA aligned scheduling is required. All machine config pools must have consistent hardware configuration. For example, all nodes are expected to have the same NUMA zone count. Engineering considerations Pods might require annotations for correct scheduling and isolation. For more information about annotations, see "CPU partitioning and performance tuning". You can configure SR-IOV virtual function NUMA affinity to be ignored during scheduling by using the excludeTopology field in SriovNetworkNodePolicy CR. Additional resources Installing the NUMA Resources Operator Scheduling NUMA-aware workloads Topology Manager policies 3.7.9. Node Configuration New in this release No reference design updates in this release Limits and requirements Analyze additional kernel modules to determine impact on CPU load, system performance, and ability to meet KPIs. Table 3.1. Additional kernel modules Feature Description Additional kernel modules Install the following kernel modules by using MachineConfig CRs to provide extended kernel functionality to CNFs. sctp ip_gre ip6_tables ip6t_REJECT ip6table_filter ip6table_mangle iptable_filter iptable_mangle iptable_nat xt_multiport xt_owner xt_REDIRECT xt_statistic xt_TCPMSS Container mount namespace hiding Reduce the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage. Creates a container mount namespace, visible to kubelet/CRI-O, to reduce system mount scanning overhead. Kdump enable Optional configuration (enabled by default) Additional resources Automatic kernel crash dumps with kdump Optimizing CPU usage with mount namespace encapsulation 3.7.10. 
Host firmware and boot loader configuration New in this release No reference design updates in this release Engineering considerations Enabling secure boot is the recommended configuration. Note When secure boot is enabled, only signed kernel modules are loaded by the kernel. Out-of-tree drivers are not supported. 3.7.11. Disconnected environment New in this release No reference design updates in this release Description Telco core clusters are expected to be installed in networks without direct access to the internet. All container images needed to install, configure, and operate the cluster must be available in a disconnected registry. This includes OpenShift Container Platform images, Day 2 OLM Operator images, and application workload images. The use of a disconnected environment provides multiple benefits, including: Security - limiting access to the cluster Curated content - the registry is populated based on curated and approved updates for clusters Limits and requirements A unique name is required for all custom CatalogSource resources. Do not reuse the default catalog names. Engineering considerations A valid time source must be configured as part of cluster installation. Additional resources About cluster updates in a disconnected environment 3.7.12. Agent-based Installer New in this release No reference design updates in this release Description Telco core clusters can be installed by using the Agent-based Installer. This method allows you to install OpenShift on bare-metal servers without requiring additional servers or VMs for managing the installation. The Agent-based Installer can be run on any system (for example, from a laptop) to generate an ISO installation image. The ISO is used as the installation media for the cluster supervisor nodes. Installation progress can be monitored using the ABI tool from any system with network connectivity to the supervisor node's API interfaces. ABI supports the following: Installation from declarative CRs Installation in disconnected environments Installation with no additional supporting install or bastion servers required to complete the installation Limits and requirements Disconnected installation requires a registry that is reachable from the installed host, with all required content mirrored in that registry. Engineering considerations Networking configuration should be applied as NMState configuration during installation. Day 2 networking configuration using the NMState Operator is not supported. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer 3.7.13. Security New in this release New knowledgebase article on creating custom node firewall rules Description Telco customers are security conscious and require clusters to be hardened against multiple attack vectors. In OpenShift Container Platform, there is no single component or feature responsible for securing a cluster. Use the following security-oriented features and configurations to secure your clusters: SecurityContextConstraints (SCC) : All workload pods should be run with restricted-v2 or restricted SCC. Seccomp : All pods should run with the RuntimeDefault (or stronger) seccomp profile. Rootless DPDK pods : Many user-plane networking (DPDK) CNFs require pods to run with root privileges. With this feature, a conformant DPDK pod can run without requiring root privileges. Rootless DPDK pods create a tap device in a rootless pod that injects traffic from a DPDK application to the kernel.
Storage : The storage network should be isolated and non-routable to other cluster networks. See the "Storage" section for additional details. Refer to Custom nftable firewall rules in OpenShift for a supported method of implementing custom nftables firewall rules in OpenShift cluster nodes. This article is intended for cluster administrators who are responsible for managing network security policies in OpenShift environments. It is crucial to carefully consider the operational implications before deploying this method, including: Early application : The rules are applied at boot time, before the network is fully operational. Ensure the rules do not inadvertently block essential services required during the boot process. Risk of misconfiguration : Errors in your custom rules can lead to unintended consequences, such as degraded performance, blocked legitimate traffic, or isolated nodes. Thoroughly test your rules in a non-production environment before deploying them to your main cluster. External endpoints : OpenShift requires access to external endpoints to function. For more information about the firewall allowlist, see "Configuring your firewall for OpenShift Container Platform". Ensure that cluster nodes are permitted access to those endpoints. Node reboot : Unless node disruption policies are configured, applying the MachineConfig CR with the required firewall settings causes a node reboot. Be aware of this impact and schedule a maintenance window accordingly. For more information, see "Using node disruption policies to minimize disruption from machine config changes". Note Node disruption policies are available in OpenShift Container Platform 4.17 and later. Network flow matrix : For more information about managing ingress traffic, see "OpenShift Container Platform network flow matrix". You can restrict ingress traffic to essential flows to improve network security. The matrix provides insights into base cluster services but excludes traffic generated by Day-2 Operators. Cluster version updates and upgrades : Exercise caution when updating or upgrading OpenShift clusters. Recent changes to the platform's firewall requirements might require adjustments to network port permissions. Although the documentation provides guidelines, note that these requirements can evolve over time. To minimize disruptions, you should test any updates or upgrades in a staging environment before applying them in production. This helps you to identify and address potential compatibility issues related to firewall configuration changes. Limits and requirements Rootless DPDK pods require the following additional configuration: Configure the container_t SELinux context for the tap plugin. Enable the container_use_devices SELinux boolean for the cluster host. Engineering considerations For rootless DPDK pod support, enable the SELinux container_use_devices boolean on the host to allow the tap device to be created. This introduces an acceptable security risk. Additional resources Configuring your firewall for OpenShift Container Platform OpenShift Container Platform network flow matrix Managing security context constraints Using node disruption policies to minimize disruption from machine config changes 3.7.14. Scalability New in this release No reference design updates in this release Description Scaling of workloads is described in "Application workloads". Limits and requirements Cluster can scale to at least 120 nodes. 3.8.
Telco core reference configuration CRs Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco core profile. Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. 3.8.1. Extracting the telco core reference design configuration CRs You can extract the complete set of custom resources (CRs) for the telco core profile from the telco-core-rds-rhel9 container image. The container image has both the required CRs, and the optional CRs, for the telco core profile. Prerequisites You have installed podman . Procedure Extract the content from the telco-core-rds-rhel9 container image by running the following commands: USD mkdir -p ./out USD podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.18 | base64 -d | tar xv -C out Verification The out directory has the following directory structure. You can view the telco core CRs in the out/telco-core-rds/ directory. Example output out/ └── telco-core-rds ├── configuration │ └── reference-crs │ ├── optional │ │ ├── logging │ │ ├── networking │ │ │ └── multus │ │ │ └── tap_cni │ │ ├── other │ │ └── tuning │ └── required │ ├── networking │ │ ├── metallb │ │ ├── multinetworkpolicy │ │ └── sriov │ ├── other │ ├── performance │ ├── scheduling │ └── storage │ └── odf-external └── install Prerequisites You have access to the cluster as a user with the cluster-admin role. You have credentials to access the registry.redhat.io container image registry. You installed the cluster-compare plugin. Procedure Login to the container image registry with your credentials by running the following command: USD podman login registry.redhat.io Additional resources Understanding the cluster-compare plugin 3.8.2. Node configuration reference CRs Table 3.2. Node configuration CRs Component Reference CR Description Optional Additional kernel modules control-plane-load-kernel-modules.yaml Optional. Configures the kernel modules for control plane nodes. No Additional kernel modules sctp_module_mc.yaml Optional. Loads the SCTP kernel module in worker nodes. No Additional kernel modules worker-load-kernel-modules.yaml Optional. Configures kernel modules for worker nodes. No Container mount namespace hiding mount_namespace_config_master.yaml Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on control plane nodes. No Container mount namespace hiding mount_namespace_config_worker.yaml Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on worker nodes. No Kdump enable kdump-master.yaml Configures kdump crash reporting on master nodes. No Kdump enable kdump-worker.yaml Configures kdump crash reporting on worker nodes. No 3.8.3. Resource tuning reference CRs Table 3.3. Resource tuning CRs Component Reference CR Description Optional System reserved capacity control-plane-system-reserved.yaml Optional. Configures kubelet, enabling auto-sizing reserved resources for the control plane node pool. No 3.8.4. Networking reference CRs Table 3.4. Networking CRs Component Reference CR Description Optional Baseline Network.yaml Configures the default cluster network, specifying OVN Kubernetes settings like routing via the host. It also allows the definition of additional networks, including custom CNI configurations, and enables the use of MultiNetworkPolicy CRs for network policies across multiple networks. No Baseline networkAttachmentDefinition.yaml Optional. 
Defines a NetworkAttachmentDefinition resource specifying network configuration details such as node selector and CNI configuration. Yes Load Balancer addr-pool.yaml Configures MetalLB to manage a pool of IP addresses with auto-assign enabled for dynamic allocation of IPs from the specified range. No Load Balancer bfd-profile.yaml Configures bidirectional forwarding detection (BFD) with customized intervals, detection multiplier, and modes for quicker network fault detection and load balancing failover. No Load Balancer bgp-advr.yaml Defines a BGP advertisement resource for MetalLB, specifying how an IP address pool is advertised to BGP peers. This enables fine-grained control over traffic routing and announcements. No Load Balancer bgp-peer.yaml Defines a BGP peer in MetalLB, representing a BGP neighbor for dynamic routing. No Load Balancer community.yaml Defines a MetalLB community, which groups one or more BGP communities under a named resource. Communities can be applied to BGP advertisements to control routing policies and change traffic routing. No Load Balancer metallb.yaml Defines the MetalLB resource in the cluster. No Load Balancer metallbNS.yaml Defines the metallb-system namespace in the cluster. No Load Balancer metallbOperGroup.yaml Defines the Operator group for the MetalLB Operator. No Load Balancer metallbSubscription.yaml Creates a subscription resource for the metallb Operator with manual approval for install plans. No Multus - Tap CNI for rootless DPDK pods mc_rootless_pods_selinux.yaml Configures a MachineConfig resource which sets an SELinux boolean for the tap CNI plugin on worker nodes. Yes NMState Operator NMState.yaml Defines an NMState resource that is used by the NMState Operator to manage node network configurations. No NMState Operator NMStateNS.yaml Creates the NMState Operator namespace. No NMState Operator NMStateOperGroup.yaml Creates the Operator group in the openshift-nmstate namespace, allowing the NMState Operator to watch and manage resources. No NMState Operator NMStateSubscription.yaml Creates a subscription for the NMState Operator, managed through OLM. No SR-IOV Network Operator sriovNetwork.yaml Defines an SR-IOV network specifying network capabilities, IP address management (ipam), and the associated network namespace and resource. No SR-IOV Network Operator sriovNetworkNodePolicy.yaml Configures network policies for SR-IOV devices on specific nodes, including customization of device selection, VF allocation (numVfs), node-specific settings (nodeSelector), and priorities. No SR-IOV Network Operator SriovOperatorConfig.yaml Configures various settings for the SR-IOV Operator, including enabling the injector and Operator webhook, disabling pod draining, and defining the node selector for the configuration daemon. No SR-IOV Network Operator SriovSubscription.yaml Creates a subscription for the SR-IOV Network Operator, managed through OLM. No SR-IOV Network Operator SriovSubscriptionNS.yaml Creates the SR-IOV Network Operator subscription namespace. No SR-IOV Network Operator SriovSubscriptionOperGroup.yaml Creates the Operator group for the SR-IOV Network Operator, allowing it to watch and manage resources in the target namespace. No 3.8.5. Scheduling reference CRs Table 3.5. Scheduling CRs Component Reference CR Description Optional NUMA-aware scheduler nrop.yaml Enables the NUMA Resources Operator, aligning workloads with specific NUMA node configurations. Required for clusters with multi-NUMA nodes. 
No NUMA-aware scheduler NROPSubscription.yaml Creates a subscription for the NUMA Resources Operator, managed through OLM. Required for clusters with multi-NUMA nodes. No NUMA-aware scheduler NROPSubscriptionNS.yaml Creates the NUMA Resources Operator subscription namespace. Required for clusters with multi-NUMA nodes. No NUMA-aware scheduler NROPSubscriptionOperGroup.yaml Creates the Operator group in the numaresources-operator namespace, allowing the NUMA Resources Operator to watch and manage resources. Required for clusters with multi-NUMA nodes. No NUMA-aware scheduler sched.yaml Configures a topology-aware scheduler in the cluster that can handle NUMA aware scheduling of pods across nodes. No NUMA-aware scheduler Scheduler.yaml Configures control plane nodes as non-schedulable for workloads. No 3.8.6. Storage reference CRs Table 3.6. Storage CRs Component Reference CR Description Optional External ODF configuration 01-rook-ceph-external-cluster-details.secret.yaml Defines a Secret resource containing base64-encoded configuration data for an external Ceph cluster in the openshift-storage namespace. No External ODF configuration 02-ocs-external-storagecluster.yaml Defines an OpenShift Container Storage (OCS) storage resource which configures the cluster to use an external storage back end. No External ODF configuration odfNS.yaml Creates the monitored openshift-storage namespace for the OpenShift Data Foundation Operator. No External ODF configuration odfOperGroup.yaml Creates the Operator group in the openshift-storage namespace, allowing the OpenShift Data Foundation Operator to watch and manage resources. No External ODF configuration odfSubscription.yaml Creates the subscription for the OpenShift Data Foundation Operator in the openshift-storage namespace. No 3.9. Telco core reference configuration software specifications The Red Hat telco core 4.18 solution has been validated using the following Red Hat software products for OpenShift Container Platform clusters. Table 3.7. Telco core cluster validated software components Component Software version Red Hat Advanced Cluster Management (RHACM) 2.12 1 Cluster Logging Operator 6.1 2 OpenShift Data Foundation 4.18 SR-IOV Network Operator 4.18 MetalLB 4.18 NMState Operator 4.18 NUMA-aware scheduler 4.18 [1] This table will be updated when the aligned RHACM version 2.13 is released. [2] This table will be updated when the aligned Cluster Logging Operator 6.2 is released. | [
"mkdir -p ./out",
"podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.18 | base64 -d | tar xv -C out",
"out/ └── telco-core-rds ├── configuration │ └── reference-crs │ ├── optional │ │ ├── logging │ │ ├── networking │ │ │ └── multus │ │ │ └── tap_cni │ │ ├── other │ │ └── tuning │ └── required │ ├── networking │ │ ├── metallb │ │ ├── multinetworkpolicy │ │ └── sriov │ ├── other │ ├── performance │ ├── scheduling │ └── storage │ └── odf-external └── install",
"podman login registry.redhat.io"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/telco-core-ref-design-specs |
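As an illustration of the sctp_module_mc.yaml reference CR listed in the node configuration table above, a MachineConfig that loads the SCTP kernel module on worker nodes generally follows the pattern below. The CR shipped in the telco-core-rds image is authoritative; this fragment only shows the typical shape.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: load-sctp-module
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modprobe.d/sctp-blacklist.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,                        # empties the default SCTP blacklist
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,sctp                    # loads the sctp module at boot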
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_regional-dr_with_advanced_cluster_management/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 3. Red Hat build of OpenJDK 11.0.20.1 release notes | Chapter 3. Red Hat build of OpenJDK 11.0.20.1 release notes Review the following release notes for an overview of the changes from the Red Hat build of OpenJDK 11.0.20.1 patch release. Note For all the other changes and security fixes, see OpenJDK 11.0.20.1 Released . Fixed Invalid CEN header error on valid .zip files Red Hat build of OpenJDK 11.0.20 introduced additional validation checks on the ZIP64 fields of .zip files (JDK-8302483). However, these additional checks caused validation failures on some valid .zip files with the following error message: Invalid CEN header (invalid zip64 extra data field size) . To fix this issue, Red Hat build of OpenJDK 11.0.20.1 supports zero-length headers and the additional padding that some ZIP64 creation tools produce. From Red Hat build of OpenJDK 11.0.20 onward, you can disable these checks by setting the jdk.util.zip.disableZip64ExtraFieldValidation system property to true . See JDK-8313765 (JDK Bug System) Increased default value of jdk.jar.maxSignatureFileSize system property Red Hat build of OpenJDK 11.0.20 introduced a jdk.jar.maxSignatureFileSize system property for configuring the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file ( JDK-8300596 ). By default, the jdk.jar.maxSignatureFileSize property was set to 8000000 bytes (8 MB), which was too small for some JAR files. Red Hat build of OpenJDK 11.0.20.1 increases the default value of the jdk.jar.maxSignatureFileSize property to 16000000 bytes (16 MB). See JDK-8313216 (JDK Bug System) Fixed NullPointerException when handling null addresses In Red Hat build of OpenJDK 11.0.20, when the serviceability agent encountered null addresses while generating thread dumps, the serviceability agent produced a NullPointerException . Red Hat build of OpenJDK 11.0.20.1 handles null addresses appropriately. See JDK-8243210 (JDK Bug System) Advisories related to Red Hat build of OpenJDK 11.0.20.1 The following advisories have been issued about bug fixes and CVE fixes included in this release: RHBA-2023:5225 RHBA-2023:5227 RHBA-2023:5229 | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/openjdk-11-0-20-1-release-notes_openjdk |
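As a usage note, both limits described above can also be adjusted explicitly on the Java command line; the JAR name and the 32 MB value below are placeholders.
USD java -Djdk.util.zip.disableZip64ExtraFieldValidation=true -jar application.jar
USD java -Djdk.jar.maxSignatureFileSize=33554432 -jar application.jar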
Chapter 1. Introduction Monitoring tools are an optional suite of tools designed to help operators maintain an OpenStack environment. The tools perform the following functions: Centralized logging: Allows you to gather logs from all components in the OpenStack environment in one central location. You can identify problems across all nodes and services, and optionally, export the log data to Red Hat for assistance in diagnosing problems. Availability monitoring: Allows you to monitor all components in the OpenStack environment and determine if any components are currently experiencing outages or are otherwise not functional. You can also configure the system to alert you when problems are identified. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/monitoring_tools_configuration_guide/sect-introduction
A.4. Reserved Keywords For Future Use | A.4. Reserved Keywords For Future Use ALLOCATE ARE ARRAY ASENSITIVE ASYMETRIC AUTHORIZATION BINARY CALLED CASCADED CHARACTER CHECK CLOSE COLLATE COMMIT CONNECT CORRESPONDING CRITERIA CURRENT_DATE CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE DATALINK DEALLOCATE DEC DEREF DESCRIBE DETERMINISTIC DISCONNECT DLNEWCOPY DLPREVIOUSCOPY DLURLCOMPLETE DLURLCOMPLETEONLY DLURLCOMPLETEWRITE DLURLPATH DLURLPATHONLY DLURLPATHWRITE DLURLSCHEME DLURLSERVER DLVALUE DYNAMIC ELEMENT EXTERNAL FREE GET GLOBAL GRANT HAS HOLD IDENTITY IMPORT INDICATOR INPUT INSENSITIVE INT INTERVAL ISOLATION LARGE LOCALTIME LOCALTIMESTAMP MATCH MEMBER METHOD MODIFIES MODULE MULTISET NATIONAL NATURAL NCHAR NCLOB NEW NONE NUMERIC OLD OPEN OUTPUT OVERLAPS PRECISION PREPARE RANGE READS RECURSIVE REFERENCING RELEASE REVOKE ROLLBACK ROLLUP SAVEPOINT SCROLL SEARCH SENSITIVE SESSION_USER SPECIFIC SPECIFICTYPE SQL START STATIC SUBMULTILIST SYMETRIC SYSTEM SYSTEM_USER TIMEZONE_HOUR TIMEZONE_MINUTE TRANSLATION TREAT VALUE VARYING WHENEVER WINDOW WITHIN XMLBINARY XMLCAST XMLDOCUMENT XMLEXISTS XMLITERATE XMLTEXT XMLVALIDATE | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/Reserved_Keywords_For_Future_Use |
6.5. Shadow Passwords In multiuser environments it is very important to use shadow passwords (provided by the shadow-utils package). Doing so enhances the security of system authentication files. For this reason, the installation program enables shadow passwords by default. The following lists the advantages that shadow passwords have over the traditional way of storing passwords on UNIX-based systems: Improves system security by moving encrypted password hashes from the world-readable /etc/passwd file to /etc/shadow , which is readable only by the root user. Stores information about password aging. Allows the use of the /etc/login.defs file to enforce security policies. Most utilities provided by the shadow-utils package work properly whether or not shadow passwords are enabled. However, since password aging information is stored exclusively in the /etc/shadow file, any commands which create or modify password aging information do not work. The following is a list of commands which do not work without first enabling shadow passwords: chage gpasswd /usr/sbin/usermod -e or -f options /usr/sbin/useradd -e or -f options | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-users-groups-shadow-utilities
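For example, with shadow passwords enabled, password aging can be inspected and adjusted with chage ; the user name below is a placeholder.
USD chage -l user01
USD chage -M 90 -W 7 user01
The -l option lists the current aging settings, -M sets the maximum password age in days, and -W sets the warning period before expiration.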
Chapter 18. StatefulSet [apps/v1] | Chapter 18. StatefulSet [apps/v1] Description StatefulSet represents a set of pods with consistent identities. Identities are defined as: - Network: A single stable DNS and hostname. - Storage: As many VolumeClaims as requested. The StatefulSet guarantees that a given network identity will always map to the same storage identity. Type object 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object A StatefulSetSpec is the specification of a StatefulSet. status object StatefulSetStatus represents the current state of a StatefulSet. 18.1.1. .spec Description A StatefulSetSpec is the specification of a StatefulSet. Type object Required selector template serviceName Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) persistentVolumeClaimRetentionPolicy object StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. podManagementPolicy string podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady , where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once. Possible enum values: - "OrderedReady" will create pods in strictly increasing order on scale up and strictly decreasing order on scale down, progressing only when the pod is ready or terminated. At most one pod will be changed at any time. - "Parallel" will create and delete pods as soon as the stateful set replica count is changed, and will not wait for pods to be ready or complete termination. replicas integer replicas is the desired number of replicas of the given Template. These are replicas in the sense that they are instantiations of the same Template, but individual replicas also have a consistent identity. If unspecified, defaults to 1. revisionHistoryLimit integer revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10. selector LabelSelector selector is a label query over pods that should match the replica count. It must match the pod template's labels. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors serviceName string serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller. template PodTemplateSpec template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. updateStrategy object StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy. volumeClaimTemplates array (PersistentVolumeClaim) volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name. 18.1.2. .spec.persistentVolumeClaimRetentionPolicy Description StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. Type object Property Type Description whenDeleted string WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of Retain causes PVCs to not be affected by StatefulSet deletion. The Delete policy causes those PVCs to be deleted. whenScaled string WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of Retain causes PVCs to not be affected by a scaledown. The Delete policy causes the associated PVCs for any excess pods above the replica count to be deleted. 18.1.3. .spec.updateStrategy Description StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy. Type object Property Type Description rollingUpdate object RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType. type string Type indicates the type of the StatefulSetUpdateStrategy. Default is RollingUpdate. Possible enum values: - "OnDelete" triggers the legacy behavior. Version tracking and ordered rolling restarts are disabled. Pods are recreated from the StatefulSetSpec when they are manually deleted. When a scale operation is performed with this strategy, new Pods will be created from the specification version indicated by the StatefulSet's currentRevision. - "RollingUpdate" indicates that update will be applied to all Pods in the StatefulSet with respect to the StatefulSet ordering constraints. When a scale operation is performed with this strategy, new Pods will be created from the specification version indicated by the StatefulSet's updateRevision. 18.1.4.
.spec.updateStrategy.rollingUpdate Description RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType. Type object Property Type Description maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding up. This can not be 0. Defaults to 1. This field is alpha-level and is only honored by servers that enable the MaxUnavailableStatefulSet feature. The field applies to all pods in the range 0 to Replicas-1. That means if there is any unavailable pod in the range 0 to Replicas-1, it will be counted towards MaxUnavailable. partition integer Partition indicates the ordinal at which the StatefulSet should be partitioned for updates. During a rolling update, all pods from ordinal Replicas-1 to Partition are updated. All pods from ordinal Partition-1 to 0 remain untouched. This is helpful in being able to do a canary based deployment. The default value is 0. 18.1.5. .status Description StatefulSetStatus represents the current state of a StatefulSet. Type object Required replicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset. collisionCount integer collisionCount is the count of hash collisions for the StatefulSet. The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision. conditions array Represents the latest available observations of a statefulset's current state. conditions[] object StatefulSetCondition describes the state of a statefulset at a certain point. currentReplicas integer currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by currentRevision. currentRevision string currentRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [0,currentReplicas). observedGeneration integer observedGeneration is the most recent generation observed for this StatefulSet. It corresponds to the StatefulSet's generation, which is updated on mutation by the API Server. readyReplicas integer readyReplicas is the number of pods created for this StatefulSet with a Ready Condition. replicas integer replicas is the number of Pods created by the StatefulSet controller. updateRevision string updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas) updatedReplicas integer updatedReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by updateRevision. 18.1.6. .status.conditions Description Represents the latest available observations of a statefulset's current state. Type array 18.1.7. .status.conditions[] Description StatefulSetCondition describes the state of a statefulset at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of statefulset condition. 18.2. 
API endpoints The following API endpoints are available: /apis/apps/v1/statefulsets GET : list or watch objects of kind StatefulSet /apis/apps/v1/watch/statefulsets GET : watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/statefulsets DELETE : delete collection of StatefulSet GET : list or watch objects of kind StatefulSet POST : create a StatefulSet /apis/apps/v1/watch/namespaces/{namespace}/statefulsets GET : watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} DELETE : delete a StatefulSet GET : read the specified StatefulSet PATCH : partially update the specified StatefulSet PUT : replace the specified StatefulSet /apis/apps/v1/watch/namespaces/{namespace}/statefulsets/{name} GET : watch changes to an object of kind StatefulSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status GET : read status of the specified StatefulSet PATCH : partially update status of the specified StatefulSet PUT : replace status of the specified StatefulSet 18.2.1. /apis/apps/v1/statefulsets Table 18.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind StatefulSet Table 18.2. HTTP responses HTTP code Reponse body 200 - OK StatefulSetList schema 401 - Unauthorized Empty 18.2.2. /apis/apps/v1/watch/statefulsets Table 18.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. Table 18.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets Table 18.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 18.6. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of StatefulSet Table 18.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 18.8. Body parameters Parameter Type Description body DeleteOptions schema Table 18.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind StatefulSet Table 18.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.11. HTTP responses HTTP code Reponse body 200 - OK StatefulSetList schema 401 - Unauthorized Empty HTTP method POST Description create a StatefulSet Table 18.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.13. Body parameters Parameter Type Description body StatefulSet schema Table 18.14. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 202 - Accepted StatefulSet schema 401 - Unauthorized Empty 18.2.4. /apis/apps/v1/watch/namespaces/{namespace}/statefulsets Table 18.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 18.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. Table 18.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.5. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} Table 18.18. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 18.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a StatefulSet Table 18.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 18.21. Body parameters Parameter Type Description body DeleteOptions schema Table 18.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StatefulSet Table 18.23. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StatefulSet Table 18.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.25. Body parameters Parameter Type Description body Patch schema Table 18.26. 
HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StatefulSet Table 18.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.28. Body parameters Parameter Type Description body StatefulSet schema Table 18.29. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty 18.2.6. /apis/apps/v1/watch/namespaces/{namespace}/statefulsets/{name} Table 18.30. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 18.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind StatefulSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 18.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.7. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status Table 18.33. 
Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 18.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified StatefulSet Table 18.35. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StatefulSet Table 18.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.37. Body parameters Parameter Type Description body Patch schema Table 18.38. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StatefulSet Table 18.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.40. Body parameters Parameter Type Description body StatefulSet schema Table 18.41. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/workloads_apis/statefulset-apps-v1 |
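As a practical illustration of the spec fields documented in this chapter (serviceName, selector, template, volumeClaimTemplates), the following is a minimal sketch of a StatefulSet manifest applied with oc. The namespace demo, the name web, the container image, and the 1Gi storage request are illustrative assumptions, and the headless Service referenced by serviceName is assumed to exist already:
oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: demo
spec:
  serviceName: web                  # governing headless Service (assumed to exist)
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web                    # must match .spec.selector
    spec:
      containers:
      - name: web
        image: registry.access.redhat.com/ubi9/httpd-24:latest   # example image
        volumeMounts:
        - name: data
          mountPath: /var/www/html
  volumeClaimTemplates:             # one PVC per pod, named data-web-<ordinal>
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
Under the hood this is a POST to /apis/apps/v1/namespaces/demo/statefulsets, the create endpoint listed in the reference above.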
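The GET endpoints above can also be exercised directly with curl against the cluster API server. This is a hedged sketch: the API server URL, namespace, and object name are placeholders, and the bearer token is assumed to come from an existing oc login session:
API_SERVER=https://api.example.openshift.local:6443   # placeholder cluster API URL
TOKEN=$(oc whoami -t)                                 # token of the current oc session

# GET /apis/apps/v1/statefulsets - list StatefulSets across all namespaces
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/apis/apps/v1/statefulsets?limit=50"

# GET /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} - read a single object
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/apis/apps/v1/namespaces/demo/statefulsets/web"

# GET .../{name}/status - read only the status subresource
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API_SERVER}/apis/apps/v1/namespaces/demo/statefulsets/web/status"
The same objects are returned by oc get statefulset web -n demo -o yaml; the -k flag only skips TLS verification and should be limited to lab environments.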
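The updateStrategy and rollingUpdate.partition fields, together with the PATCH endpoint, support staged (canary) rollouts. A brief sketch with an assumed StatefulSet web in namespace demo:
# Stage a canary: with partition=2, only pods with ordinal >= 2 are updated.
oc -n demo patch statefulset web --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'

# Complete the rollout for every ordinal by lowering the partition back to 0.
oc -n demo patch statefulset web --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'

# Watch .status until updatedReplicas equals replicas and the revisions converge.
oc -n demo rollout status statefulset/web
oc patch with --type merge issues the same PATCH request documented for /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} above.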
Chapter 18. Post-installation security hardening | Chapter 18. Post-installation security hardening RHEL is designed with robust security features enabled by default. However, you can enhance its security further through additional hardening measures. For more information about: Installing security updates and displaying additional details about the updates to keep your RHEL systems secured against newly discovered threats and vulnerabilities, see Managing and monitoring security updates . Processes and practices for securing RHEL servers and workstations against local and remote intrusion, exploitation, and malicious activity, see Security hardening . Control how users and processes interact with the files on the system or control which users can perform which actions by mapping them to specific SELinux confined users, see Using SELinux . Tools and techniques to improve the security of your networks and lower the risks of data breaches and intrusions, see Securing networks . Packet filters, such as firewalls, that use rules to control incoming, outgoing, and forwarded network traffic, see Using and configuring firewalld and Getting started with nftables . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/post-installation-security-hardening_rhel-installer |
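As a small, hedged illustration of the first and last items above — applying security errata and restricting traffic with firewalld — the commands below use standard RHEL tooling; the https service is only an example of a service you might permit:
# List pending security errata, then apply only security updates.
sudo dnf updateinfo list security
sudo dnf upgrade --security -y

# Allow a single expected service through firewalld and make the change persistent.
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
sudo firewall-cmd --list-services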
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Google Cloud clusters. Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation in internal mode, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Google Cloud Deploy standalone Multicloud Object Gateway component | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_google_cloud/preface-ocs-gcp |
Chapter 45. Managing host groups using the IdM Web UI | Chapter 45. Managing host groups using the IdM Web UI Learn more about how to manage host groups and their members in the Web interface (Web UI) by using the following operations: Viewing host groups and their members Creating host groups Deleting host groups Adding host group members Removing host group members Adding host group member managers Removing host group member managers 45.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 45.2. Viewing host groups in the IdM Web UI Follow this procedure to view IdM host groups using the Web interface (Web UI). Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Click Identity Groups , and select the Host Groups tab. The page lists the existing host groups and their descriptions. You can search for a specific host group. Click on a group in the list to display the hosts that belong to this group. You can limit results to direct or indirect members. Select the Host Groups tab to display the host groups that belong to this group (nested host groups). You can limit results to direct or indirect members. 45.3. Creating host groups in the IdM Web UI Follow this procedure to create IdM host groups using the Web interface (Web UI). Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Click Identity Groups , and select the Host Groups tab. Click Add . The Add host group dialog appears. Provide the information about the group: name (required) and description (optional). Click Add to confirm. 45.4. Deleting host groups in the IdM Web UI Follow this procedure to delete IdM host groups using the Web interface (Web UI). Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Click Identity Groups and select the Host Groups tab. Select the IdM host group to remove, and click Delete . A confirmation dialog appears. Click Delete to confirm Note Removing a host group does not delete the group members from IdM. 45.5. Adding host group members in the IdM Web UI Follow this procedure to add host group members in IdM using the web interface (Web UI). Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Click Identity Groups and select the Host Groups tab. 
Click the name of the group to which you want to add members. Click the tab Hosts or Host groups depending on the type of members you want to add. The corresponding dialog appears. Select the hosts or host groups to add, and click the > arrow button to move them to the Prospective column. Click Add to confirm. 45.6. Removing host group members in the IdM Web UI Follow this procedure to remove host group members in IdM using the web interface (Web UI). Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Click Identity Groups and select the Host Groups tab. Click the name of the group from which you want to remove members. Click the tab Hosts or Host groups depending on the type of members you want to remove. Select the check box to the member you want to remove. Click Delete. A confirmation dialog appears. Click Delete to confirm. The selected members are deleted. 45.7. Adding IdM host group member managers using the Web UI Follow this procedure to add users or user groups as host group member managers in IdM using the web interface (Web UI). Member managers can add hosts group member managers to IdM host groups but cannot change the attributes of a host group. Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . You must have the name of the host group you are adding as member managers and the name of the host group you want them to manage. Procedure Click Identity Groups and select the Host Groups tab. Click the name of the group to which you want to add member managers. Click the member managers tab User Groups or Users depending on the type of member managers you want to add. The corresponding dialog appears. Click Add . Select the users or user groups to add, and click the > arrow button to move them to the Prospective column. Click Add to confirm. Note After you add a member manager to a host group, the update may take some time to spread to all clients in your Identity Management environment. Verification On the Host Group dialog, verify the user group or user has been added to the member managers list of groups or users. 45.8. Removing IdM host group member managers using the Web UI Follow this procedure to remove users or user groups as host group member managers in IdM using the web interface (Web UI). Member managers can remove hosts group member managers from IdM host groups but cannot change the attributes of a host group. Prerequisites Administrator privileges for managing IdM or User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . You must have the name of the existing member manager host group you are removing and the name of the host group they are managing. Procedure Click Identity Groups and select the Host Groups tab. Click the name of the group from which you want to remove member managers. Click the member managers tab User Groups or Users depending on the type of member managers you want to remove. The corresponding dialog appears. Select the user or user groups to remove and click Delete . Click Delete to confirm. Note After you remove a member manager from a host group, the update may take some time to spread to all clients in your Identity Management environment. 
Verification On the Host Group dialog, verify the user group or user has been removed from the member managers list of groups or users. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-host-groups-using-the-idm-web-ui_managing-users-groups-hosts |
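The chapter above documents the Web UI workflow; for reference, the same host group operations have ipa command-line equivalents. This is a hedged sketch using an assumed host group databases, an assumed host db1.idm.example.com, and an assumed user jsmith:
# Create a host group and display it.
ipa hostgroup-add databases --desc="Primary database servers"
ipa hostgroup-show databases

# Add a direct host member and a nested host group, then remove the host again.
ipa hostgroup-add-member databases --hosts=db1.idm.example.com
ipa hostgroup-add-member databases --hostgroups=backup-dbs
ipa hostgroup-remove-member databases --hosts=db1.idm.example.com

# Grant and revoke member-manager rights for a user.
ipa hostgroup-add-member-manager databases --users=jsmith
ipa hostgroup-remove-member-manager databases --users=jsmith

# Delete the host group; member hosts themselves remain in IdM.
ipa hostgroup-del databases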
Chapter 16. Virtualization | Chapter 16. Virtualization Hyper-V guests work properly with VHDX files Previously, when running Red Hat Enterprise Linux as a guest on a Microsoft Hyper-V hypervisor with a large dynamic Hyper-V virtual hard disk (VHDX) attached and using the ext3 file system, a call trace in some cases appeared and made it impossible to shut down the guest. With this update, Red Hat Enterprise Linux guests on Windows Hyper-V handle VHDX files correctly, and the described problem no longer occurs. (BZ#982542) The hv_netvsc module works correctly with Hyper-V Due to a race condition, the hv_netvsc module previously in some cases terminated unexpectedly when it was unloading. This caused a kernel crash on Red Hat Enterprise Linux guests running on the Microsoft Hyper-V hypervisor. The race condition has been removed, which prevents the described kernel crashes from occurring. (BZ#1118163) Guests shut down correctly when processing interrupts Prior to this update, if processes that generate interrupts were active during the guest shut down sequence, the virtio driver in some cases did not correctly clear the interrupts. As a consequence, the guest kernel became unresponsive, which prevented the shut down from completing. With this update, the virtio driver processes interrupts more effectively, and guests now shut down reliably in the described scenario. (BZ#1199155) Consistent save times for taking guest snapshots Prior to this update, saving a KVM guest snapshot involved overwriting the state of the virtual machine using copy-on-write operations. As a consequence, taking every snapshot after the first one took an excessive amount of time. Now, the guest state written in the active layer is discarded after the snapshot is taken, which avoids the need for copy-on-write operations. As a result, saving subsequent snapshots is now as quick as saving the first one. (BZ# 1219908 ) The at program works correctly with virt-sysprep When using the virt-sysprep utility to create a Red Hat Enterprise Linux guest template, the at program in the resulting guest could not be used. This update ensures that virt-sysprep does not delete /var/spool/at/.SEQ files in these guests, and at now works as expected. (BZ# 1229305 ) Failed logical volume creation no longer deletes existing volumes Previously, when attempting to create a logical volume in a logical-volume pool that already contained a logical volume with the specified name, libvirt in some cases deleted the existing logical volume. This update adds more checks to determine the cause of failure when creating logical volumes, which prevents libvirt from incorrectly removing existing logical volumes in the described circumstances. (BZ# 1232170 ) Domain information from LIBVIRT-MIB.txt is loaded correctly Previously, the LIBVIRT-MIB.txt file in the libvirt-snmp package did not fully comply with the formatting rules of the Simple Network Management Protocol (SNMP). As a consequence, SNMP software could not load the file and thus failed to read the domain information it provides, such as exposed variables, their ranges, or certain named values. This update ensures that LIBVIRT-MIB.txt is fully compliant with SNMP formatting rules, and the file is now loaded as expected. (BZ#1242320) System log is no longer flooded with error messages about missing metadata Prior to this update, the libvirt library was logging the VIR_ERR_NO_DOMAIN_METADATA error code with the error priority, rather than the 'debug' severity usual for this kind of message. 
As a consequence, if the metadata APIs were used heavily while metadata entries were missing, the system log was flooded with irrelevant messages. With this update, the severity of VIR_ERR_NO_DOMAIN_METADATA has been lowered to debug , thus fixing this problem. (BZ#1260864) Guests with strict NUMA pinning boot more reliably When starting a virtual machine configured with strict Non-Uniform Memory Access (NUMA) pinning, the KVM module could not allocate memory from the Direct Memory Access (DMA) zones if the NUMA nodes were not included in the configured limits set by the libvirt daemon. This led to a Quick Emulator (QEMU) process failure, which in turn prevented the guest from booting. With this update, the cgroup limits are applied after the KVM allocates the memory, and the QEMU process, as well as the guest, now starts as expected. (BZ# 1263263 ) Kernel panics caused by struct kvm handling are fixed When creating a KVM guest, the struct kvm data structure corresponding to the virtual machine was in some cases not handled properly. This caused corruption in the kernel memory and triggered a kernel panic on the host. Error conditions during guest creation are now treated properly, which prevents the described kernel panic from occurring. (BZ#1270791) Limited KSM deduplication factor Previously, the kernel same-page merging (KSM) deduplication factor was not explicitly limited, which caused Red Hat Enterprise Linux hosts to have performance problems or become unresponsive in case of high workloads. This update limits the KSM deduplication factor, and thus eliminates the described problems with virtual memory operations related to KSM pages. (BZ#1262294) Hyper-V daemon services are no longer unavailable on slowly-booting Red Hat Enterprise Linux 6 guests Prior to this update, if a Red Hat Enterprise Linux 6 guest running on a Hyper-V hypervisor took a long time to boot, the hypervkvpd , hypervvssd , and hypervfcopy Hyper-V daemons in some cases failed to start due to a negotiation timeout. As a consequence, the guest could not use the services provided by these daemons, including online backup, file copy, and network settings. This update ensures that the Hyper-V daemons start properly in the described scenario, which makes the affected services available as expected. (BZ#1216950) Starting guests when using macvtap and Cisco VM-FEX no longer fails Prior to this update, on hosts using macvtap connections to Cisco Virtual Machine Fabric Extender (VM-FEX) network cards, starting a virtual machine failed with the following error message: This bug has been fixed, and starting guests on the described hosts now works as expected. (BZ#1251532) Faster startup for virt-manager on hosts with many network interfaces On hosts with very large numbers of bridged, VLAN, or bond interfaces, starting the virt-manager utility previously took a very long time. This update optimizes the netcf query that caused this delay, which significantly improves the start-up speed of virt-manager on the described systems. (BZ#1235959) | [
"internal error missing IFLA_VF_INFO in netlink response"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_virtualization |
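The KSM fix above turns on how far page deduplication has gone on a given host. As a quick, hedged illustration (the sysfs paths are the standard kernel KSM counters and are not taken from this erratum), the current deduplication factor can be estimated as follows:
# Sketch: estimate the KSM deduplication factor on a KVM host.
# The sysfs files are the standard kernel KSM counters; the output is only meaningful when KSM is enabled.
KSM=/sys/kernel/mm/ksm
if [ "$(cat $KSM/run)" = "1" ]; then
    shared=$(cat $KSM/pages_shared)      # distinct pages kept after merging
    sharing=$(cat $KSM/pages_sharing)    # guest pages mapped onto those shared pages
    echo "pages_shared=$shared pages_sharing=$sharing"
    [ "$shared" -gt 0 ] && echo "approximate deduplication factor: $((sharing / shared))"
else
    echo "KSM is not running on this host"
fi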
Building your RHEL AI environment | Building your RHEL AI environment Red Hat Enterprise Linux AI 1.3 Creating accounts, initializing RHEL AI, downloading models, and serving/chat customizations Red Hat RHEL AI Documentation Team | [
"rhc connect --organization <org id> --activation-key <created key>",
"sudo mkdir -p /etc/ilab sudo touch /etc/ilab/insights-opt-out",
"ilab system info",
"ilab config init",
"Generating config file and profiles: /home/user/.config/instructlab/config.yaml /home/user/.local/share/instructlab/internal/system_profiles/ We have detected the NVIDIA H100 X4 profile as an exact match for your system. -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! --------------------------------------------",
"Please choose a system profile to use. System profiles apply to all parts of the config file and set hardware specific defaults for each command. First, please select the hardware vendor your system falls into [0] NO SYSTEM PROFILE [1] NVIDIA Enter the number of your choice [0]: 4 You selected: NVIDIA Next, please select the specific hardware configuration that most closely matches your system. [0] No system profile [1] NVIDIA H100 X2 [2] NVIDIA H100 X8 [3] NVIDIA H100 X4 [4] NVIDIA L4 X8 [5] NVIDIA A100 X2 [6] NVIDIA A100 X8 [7] NVIDIA A100 X4 [8] NVIDIA L40S X4 [9] NVIDIA L40S X8 Enter the number of your choice [hit enter for hardware defaults] [0]: 3",
"You selected: /Users/<user>/.local/share/instructlab/internal/system_profiles/nvidia/H100/h100_x4.yaml -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! --------------------------------------------",
"rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/",
"ilab config init --profile <path-to-system-profile>",
"ilab config init --profile ~/.local/share/instructlab/internal/system_profiles/amd/mi300x/mi300x_x8.yaml",
"├─ ~/.config/instructlab/config.yaml 1 ├─ ~/.cache/instructlab/models/ 2 ├─ ~/.local/share/instructlab/datasets 3 ├─ ~/.local/share/instructlab/taxonomy 4 ├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ 5",
"ilab config show",
"ilab config edit",
"ilab model download --repository docker://registry.redhat.io/rhelai1/knowledge-adapter-v3 --release latest",
"ilab model download --repository docker://<repository_and_model> --release <release>",
"ilab model download --repository docker://registry.redhat.io/rhelai1/granite-8b-starter-v1 --release latest",
"ilab model list",
"+-----------------------------------+---------------------+---------+ | Model Name | Last Modified | Size | +-----------------------------------+---------------------+---------+ | models/prometheus-8x7b-v2-0 | 2024-08-09 13:28:50 | 87.0 GB| | models/mixtral-8x7b-instruct-v0-1 | 2024-08-09 13:28:24 | 87.0 GB| | models/granite-8b-starter-v1 | 2024-08-09 14:28:40 | 16.6 GB| | models/granite-8b-lab-v1 | 2024-08-09 14:40:35 | 16.6 GB| +-----------------------------------+---------------------+---------+",
"ls ~/.cache/instructlab/models",
"granite-8b-starter-v1 granite-8b-lab-v1",
"ilab model serve",
"ilab model serve --model-path <model-path>",
"ilab model serve --model-path ~/.cache/instructlab/models/granite-8b-code-instruct",
"INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-8b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.",
"mkdir -p USDHOME/.config/systemd/user",
"cat << EOF > USDHOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF",
"systemctl --user daemon-reload",
"systemctl --user start ilab-serve.service",
"systemctl --user status ilab-serve.service",
"journalctl --user-unit ilab-serve.service",
"sudo loginctl enable-linger",
"systemctl --user stop ilab-serve.service",
"mkdir -p `pwd`/nginx/ssl/",
"cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4",
"openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf",
"openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout",
"mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf",
"podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx",
"ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url",
"curl --location 'https://rhelai.redhat.com:8443/v1' --header 'Content-Type: application/json' --header 'Authorization: Bearer <api-key>' --data '{ \"model\": \"/var/home/cloud-user/.cache/instructlab/models/granite-7b-redhat-lab\", \"messages\": [ { \"role\": \"system\", \"content\": \"You are a helpful assistant.\" }, { \"role\": \"user\", \"content\": \"Hello!\" } ] }' | jq .",
"openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt",
"sudo cp server.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust",
"cat server.crt >> USD(python -m certifi)",
"ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1",
"ilab model chat",
"ilab model chat --model <model-path>",
"ilab model chat --model ~/.cache/instructlab/models/granite-8b-code-instruct",
"ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-8B-CODE-INSTRUCT (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]",
"export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())')",
"echo USDVLLM_API_KEY",
"ilab config edit",
"serve: vllm: vllm_args: - --api-key - <api-key-string>",
"ilab model chat",
"openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'}",
"ilab model chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY",
"ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html-single/building_your_rhel_ai_environment/index |
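The steps above can be strung together into a small smoke test of the serving stack. This is a sketch only: it assumes the ilab-serve.service user unit and the VLLM_API_KEY variable created earlier in this procedure, and the /v1/models path of the OpenAI-compatible endpoint is an assumption rather than a value mandated by RHEL AI.
# Sketch: confirm the locally served model is up and the API key is accepted.
systemctl --user is-active ilab-serve.service || systemctl --user start ilab-serve.service
sleep 30    # give vLLM time to load the model before probing it
curl -sf http://127.0.0.1:8000/v1/models \
     -H "Authorization: Bearer $VLLM_API_KEY" \
  && echo "model endpoint is serving" \
  || echo "endpoint not ready; inspect: journalctl --user-unit ilab-serve.service"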
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server | Chapter 8. Uninstalling a cluster on IBM Power Virtual Server You can remove a cluster that you deployed to IBM Power(R) Virtual Server. 8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IBMCLOUD_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. You might have to run the openshift-install destroy command up to three times to ensure a proper cleanup. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power_virtual_server/uninstalling-cluster-ibm-power-vs |
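The PVC cleanup in the first step can be scripted. The following is a sketch only: it assumes an authenticated ibmcloud CLI session, the VPC plugin's JSON output, and the jq utility, and it deletes every volume it finds in the resource group, so review the listed volumes before running it.
# Sketch: delete any leftover volumes in the cluster's resource group before destroying it.
RG=<infrastructure_id>
ibmcloud is volumes --resource-group-name "$RG"        # review this list first
for vol in $(ibmcloud is volumes --resource-group-name "$RG" --output JSON | jq -r '.[].id'); do
    echo "deleting volume $vol"
    ibmcloud is volume-delete --force "$vol"
done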
Chapter 4. Open source license | Chapter 4. Open source license GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (c) 2007 Free Software Foundation, Inc.< https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program- to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. 
The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate | [
"<one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.",
"<program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details."
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_ansible_playbooks/assembly-open-source-license |
C.4. Other Restrictions | C.4. Other Restrictions For the list of all other restrictions and issues affecting virtualization, read the Red Hat Enterprise Linux 7 Release Notes . The Red Hat Enterprise Linux 7 Release Notes cover new features, known issues, and restrictions, and are updated as issues are discovered. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtualization_restrictions-other_restrictions
Chapter 12. Etcd [operator.openshift.io/v1] | Chapter 12. Etcd [operator.openshift.io/v1] Description Etcd provides information to configure an operator to manage etcd. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 12.1.1. .spec Description Type object Property Type Description controlPlaneHardwareSpeed string HardwareSpeed allows user to change the etcd tuning profile which configures the latency parameters for heartbeat interval and leader election timeouts allowing the cluster to tolerate longer round-trip-times between etcd members. Valid values are "", "Standard" and "Slower". "" means no opinion and the platform is left to choose a reasonable default which is subject to change without notice. failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. 
Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 12.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. controlPlaneHardwareSpeed string ControlPlaneHardwareSpeed declares valid hardware speed tolerance levels generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 12.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 12.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 12.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 12.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 12.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 12.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. 
Type object Required nodeName Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 12.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/etcds DELETE : delete collection of Etcd GET : list objects of kind Etcd POST : create an Etcd /apis/operator.openshift.io/v1/etcds/{name} DELETE : delete an Etcd GET : read the specified Etcd PATCH : partially update the specified Etcd PUT : replace the specified Etcd /apis/operator.openshift.io/v1/etcds/{name}/status GET : read status of the specified Etcd PATCH : partially update status of the specified Etcd PUT : replace status of the specified Etcd 12.2.1. /apis/operator.openshift.io/v1/etcds HTTP method DELETE Description delete collection of Etcd Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Etcd Table 12.2. HTTP responses HTTP code Reponse body 200 - OK EtcdList schema 401 - Unauthorized Empty HTTP method POST Description create an Etcd Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body Etcd schema Table 12.5. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 202 - Accepted Etcd schema 401 - Unauthorized Empty 12.2.2. /apis/operator.openshift.io/v1/etcds/{name} Table 12.6. 
Global path parameters Parameter Type Description name string name of the Etcd HTTP method DELETE Description delete an Etcd Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Etcd Table 12.9. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Etcd Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Etcd Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. 
Body parameters Parameter Type Description body Etcd schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 401 - Unauthorized Empty 12.2.3. /apis/operator.openshift.io/v1/etcds/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the Etcd HTTP method GET Description read status of the specified Etcd Table 12.16. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Etcd Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Etcd Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Etcd schema Table 12.21. 
HTTP responses HTTP code Response body 200 - OK Etcd schema 201 - Created Etcd schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/etcd-operator-openshift-io-v1
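In day-to-day administration these endpoints are usually exercised through the oc client rather than raw HTTP. A hedged sketch follows: the cluster-scoped Etcd object is conventionally the singleton named cluster, and Slower is one of the documented hardware-speed tolerance values; verify both against your cluster before applying the patch.
# Sketch: read and tune the Etcd operator object with the oc client (cluster-admin context assumed).
oc get etcd cluster -o jsonpath='{.spec.controlPlaneHardwareSpeed}{"\n"}'
oc patch etcd cluster --type=merge -p '{"spec":{"controlPlaneHardwareSpeed":"Slower"}}'
# Watch the operator react through its status conditions.
oc get etcd cluster -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'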
Chapter 1. High availability services | Chapter 1. High availability services Red Hat OpenStack Platform (RHOSP) employs several technologies to provide the services required to implement high availability (HA). Service types Core container Core container services are Galera , RabbitMQ , Redis , and HAProxy . These services run on all Controller nodes and require specific management and constraints for the start, stop and restart actions. You use Pacemaker to launch, manage, and troubleshoot core container services. Note RHOSP uses the MariaDB Galera Cluster to manage database replication. Active-passive Active-passive services run on one Controller node at a time, and include services such as openstack-cinder-volume . To move an active-passive service, you must use Pacemaker to ensure that the correct stop-start sequence is followed. Systemd and plain container Systemd and plain container services are independent services that can withstand a service interruption. Therefore, if you restart a high availability service such as Galera, you do not need to manually restart any other service, such as nova-api . You can use systemd or Podman to directly manage systemd and plain container services. When orchestrating your HA deployment with the director, the director uses templates and Puppet modules to ensure that all services are configured and launched correctly. In addition, when troubleshooting HA issues, you must interact with services in the HA framework using the podman command or the systemctl command. Service modes HA services can run in one of the following modes: Active-active : Pacemaker runs the same service on multiple Controller nodes, and uses HAProxy to distribute traffic across the nodes or to a specific Controller with a single IP address. In some cases, HAProxy distributes traffic to active-active services with Round Robin scheduling. You can add more Controller nodes to improve performance. Active-passive : Services that are unable to run in active-active mode must run in active-passive mode. In this mode, only one instance of the service is active at a time. For example, HAProxy uses stick-table options to direct incoming Galera database connection requests to a single back-end service. This helps prevent too many simultaneous connections to the same data from multiple Galera nodes. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_deployment_and_usage/concept_ha-services |
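As a hedged illustration of the split between Pacemaker-managed and independently managed services, the following commands can be run on a Controller node. The container and unit names (galera, tripleo_nova_api) vary by release and are assumptions here, not fixed names.
# Sketch: inspect HA services on an RHOSP Controller node.
sudo pcs status                           # Pacemaker view of core container and active-passive resources
sudo podman ps --filter name=galera       # the Galera core container managed through a Pacemaker bundle
sudo systemctl status tripleo_nova_api    # an independent systemd-managed container service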
Chapter 10. Diagnosing and Correcting Problems in a Cluster | Chapter 10. Diagnosing and Correcting Problems in a Cluster Cluster problems, by nature, can be difficult to troubleshoot. This is due to the increased complexity that a cluster of systems introduces as opposed to diagnosing issues on a single system. However, there are common issues that system administrators are more likely to encounter when deploying or administering a cluster. Understanding how to tackle those common issues can help make deploying and administering a cluster much easier. This chapter provides information about some common cluster issues and how to troubleshoot them. Additional help can be found in our knowledge base and by contacting an authorized Red Hat support representative. If your issue is related to the GFS2 file system specifically, you can find information about troubleshooting common GFS2 issues in the Global File System 2 document. 10.1. Configuration Changes Do Not Take Effect When you make changes to a cluster configuration, you must propagate those changes to every node in the cluster. When you configure a cluster using Conga , Conga propagates the changes automatically when you apply the changes. For information on propagating changes to cluster configuration with the ccs command, see Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . For information on propagating changes to cluster configuration with command line tools, see Section 9.4, "Updating a Configuration" . If you make any of the following configuration changes to your cluster, it is not necessary to restart the cluster after propagating those changes for the changes to take effect. Deleting a node from the cluster configuration, except where the node count changes from greater than two nodes to two nodes. Adding a node to the cluster configuration, except where the node count changes from two nodes to greater than two nodes. Changing the logging settings. Adding, editing, or deleting HA services or VM components. Adding, editing, or deleting cluster resources. Adding, editing, or deleting failover domains. Changing any corosync or openais timers. If you make any other configuration changes to your cluster, however, you must restart the cluster to implement those changes. The following cluster configuration changes require a cluster restart to take effect: Adding or removing the two_node option from the cluster configuration file. Renaming the cluster. Adding, changing, or deleting heuristics for quorum disk, changing any quorum disk timers, or changing the quorum disk device. For these changes to take effect, a global restart of the qdiskd daemon is required. Changing the central_processing mode for rgmanager . For this change to take effect, a global restart of rgmanager is required. Changing the multicast address. Switching the transport mode from UDP multicast to UDP unicast, or switching from UDP unicast to UDP multicast. You can restart the cluster using Conga , the ccs command, or command line tools. For information on restarting a cluster with Conga , see Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters" . For information on restarting a cluster with the ccs command, see Section 7.2, "Starting and Stopping a Cluster" . For information on restarting a cluster with command line tools, see Section 9.1, "Starting and Stopping the Cluster Software" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-troubleshoot-CA
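As a brief sketch of the workflow described above, using the ccs command with a placeholder host name:
# Sketch: propagate an updated configuration, and restart the cluster only when the change requires it.
ccs -h node01.example.com --sync --activate    # push cluster.conf to all nodes and activate it
# Only for changes that need a full restart (renaming the cluster, multicast changes, and so on):
ccs -h node01.example.com --stopall
ccs -h node01.example.com --startall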
8.206. ruby | 8.206. ruby 8.206.1. RHBA-2014:1470 - ruby bug fix and enhancement update Updated ruby packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Ruby is an extensible, interpreted, object-oriented scripting language. It has features to process text files and to do system management tasks. Note The ruby package has been upgraded to upstream version 1.8.7, which provides a number of bug fixes and enhancements over the previous version. (BZ# 830098 ) This update also fixes the following bugs: Bug Fixes BZ# 784766 The Tracer module introduced with SystemTap probes in a previous release of the ruby package collided with the native Tracer class implemented in Ruby. Consequently, when using Ruby with a debugger or tracer, the following exception was raised: /usr/lib/ruby/1.8/tracer.rb:16: Tracer is not a class (TypeError) With this update, the Tracer module has been renamed to SystemTap, or alternatively DTrace. To apply this fix, instances of the Tracer.fire method should be changed to SystemTap.fire or DTrace.fire in previously written Ruby code. BZ# 802946 Prior to this update, ruby failed to start an SSL server in FIPS mode due to usage of a forbidden MD5 algorithm. With this update, MD5 has been replaced by SHA256, thus fixing this bug. BZ# 997886 , BZ# 1033864 Due to changes in OpenSSL configuration options, the ruby package was not compatible with builds of OpenSSL that have support for Elliptic Curve Cryptography (ECC) enabled, which was introduced in Red Hat Enterprise Linux 6. Consequently, ruby failed to build. This update enables ECC support in Ruby, thus fixing the build problem. The ruby package has been upgraded to upstream version 1.8.7, which provides a number of bug fixes and enhancements over the previous version. (BZ#830098) All ruby users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ruby
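To locate code affected by the Tracer rename, a simple search of your own Ruby sources is enough. This is a sketch with a hypothetical search path; review each match before changing it.
# Sketch: find remaining calls to the old Tracer.fire probe helper.
grep -rn --include='*.rb' 'Tracer\.fire' /usr/share/myapp 2>/dev/null
# Preview an automated rewrite of a single file before editing it for real:
# sed -n 's/Tracer\.fire/SystemTap.fire/gp' /usr/share/myapp/probes.rb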
Chapter 5. Managing top level domain names | Chapter 5. Managing top level domain names This section introduces top-level domains and describes how to create and manage them in the Red Hat OpenStack Platform DNS service (designate). The way in which you manage what domain names users are allowed to create is through denylists. The topics included in this section are: Section 5.1, "About top-level domains" Section 5.2, "Creating top-level domains" Section 5.3, "Listing and showing top-level domains" Section 5.4, "Modifying top-level domains" Section 5.5, "Deleting top-level domains" Section 5.6, "About DNS service denylists" Section 5.7, "About DNS service regular expressions in denylists" Section 5.8, "Creating DNS service denylists" Section 5.9, "Listing and showing DNS service denylists" Section 5.10, "Modifying DNS service denylists" Section 5.11, "Deleting DNS service denylists" 5.1. About top-level domains You can use top-level domains (TLDs) to restrict the domains under which users can create zones. In the Domain Name System (DNS) the term TLD refers specifically to the set of domains that reside directly below the root, such as .com . In the Red Hat OpenStack Platform (RHOSP) DNS service (designate), a TLD can be any valid domain. Because TLDs define the set of allowed domains, the zone that a user creates must exist within one of the TLDs. If no TLDs have been created in the DNS service, then users can create any zone. TLDs do not have a policy that allows privileged users to create zones outside the allowed TLDs. Example After creating the .com TLD, if a user attempts to create a zone that is not contained within the .com TLD, the attempt fails. Sample output You can create, list, show, modify, and delete TLDs using the OpenStack Client openstack tld commands. Additional resources tld in the Command Line Interface Reference zone in the Command Line Interface Reference 5.2. Creating top-level domains Top-level domains (TLDs) enable you to restrict the domains under which users can create zones. In the Red Hat OpenStack Platform (RHOSP) DNS service (designate), a TLD can be any valid domain. To create TLDs, use the OpenStack Client openstack tld create command. Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example You create a TLD by running the openstack tld create command. Example For example, if you want to require that users create zones ending in .org , you can create a single .org TLD: Sample output Tip When using the openstack tld command , ensure that the fully qualified domain name (FQDN) that you enter has no trailing dot, for example, .net. . Verification Run the openstack tld list command, and confirm that your TLD exists. Example Additional resources tld create in the Command Line Interface Reference 5.3. Listing and showing top-level domains You can query the Red Hat OpenStack Platform DNS service (designate) database and list all of the top-level domains (TLDs), or display properties for a particular TLD. The OpenStack Client commands for doing this are openstack tld list and openstack tld show , respectively. Procedure Source your credentials file. Example Use the following command to list all of the TLDs in the DNS service database: Use the openstack tld show <TLD_NAME_or_ID> command to display the properties for a particular TLD. Example Additional resources tld list in the Command Line Interface Reference tld show in the Command Line Interface Reference 5.4. 
Modifying top-level domains The Red Hat OpenStack Platform (RHOSP) DNS service (designate) enables you to change various properties of a top-level domain (TLD), such as its name. You modify TLDs by using the OpenStack Client openstack tld set command. Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example You can modify a TLD in various ways by using the following command options: Note The earlier syntax diagram does not show the various formatting options for the openstack tld set command. For the list of all the command options, see the link in "Additional resources," later. In this example, the openstack tld set command renames the org TLD to example.net : Example Sample output Verification Run the openstack tld show <TLD_NAME_or_ID> command, and confirm that your modifications exist. Additional resources tld set in the Command Line Interface Reference 5.5. Deleting top-level domains The Red Hat OpenStack Platform (RHOSP) DNS service (designate) enables you to remove a top-level domain (TLD) by using the OpenStack Client openstack tld delete command. Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example Obtain the ID or the name for the TLD that you want to delete by running the following command: Using either the name or the ID from the previous step, enter the following command: There is no output when this command is successful. Verification Run the openstack tld show <TLD_NAME_or_ID> command, and verify that the TLD has been removed. Additional resources tld delete in the Command Line Interface Reference 5.6. About DNS service denylists The Red Hat OpenStack Platform (RHOSP) DNS service (designate) has a denylist feature that enables you to prevent users from creating zones with names that match a particular regular expression. For example, you might use a denylist to prevent users from: creating a specific zone. creating zones that contain a certain string. creating subzones of a certain zone. If example.com. is a member of a denylist, and a domain or a project user attempts to create a zone like foo.example.com. or example.com. , they encounter an error: Note Users who satisfy the use_blacklisted_zone role-based access control can create zones with names that are on a denylist. By default, the only users who have this override are RHOSP system administrators. You can create, list, show, modify, and delete denylists using the OpenStack Client openstack zone blacklist commands. Additional resources zone blacklist create in the Command Line Interface Reference 5.7. About DNS service regular expressions in denylists A large part of working with denylists in the Red Hat OpenStack Platform DNS service (designate) is using regular expressions (regexes), which can be difficult to use. The Python documentation about regex might serve as a useful introduction. Online regex tools can assist when building and testing regexes for use with the denylist API. Additional resources Regular Expression HOWTO in the Python 3 documentation Section 5.6, "About DNS service denylists" 5.8. Creating DNS service denylists Denylists in the Red Hat OpenStack Platform DNS service (designate) enable you to prevent users from creating zones with names that match a particular regular expression. You create denylists with the OpenStack Client openstack zone blacklist create command. Prerequisites You must be a RHOSP user with the admin role.
Procedure As a cloud administrator, source your credentials file. Example Use the openstack zone blacklist create command to create a denylist. In this example, the domain example.com. and all of its subdomains are added to a denylist. Example Sample output Verification Run the openstack zone blacklist list command, and confirm that your denylist exists. Additional resources zone blacklist create in the Command Line Interface Reference Section 5.7, "About DNS service regular expressions in denylists" 5.9. Listing and showing DNS service denylists You can query the Red Hat OpenStack Platform DNS service (designate) database and view all of the denylists, or display properties for a particular denylist. The OpenStack Client commands for doing this are openstack zone blacklist list and openstack zone blacklist show , respectively. Viewing all of the denylists can be helpful, because you must know the denylist ID to be able to use the other denylist commands. Procedure Source your credentials file. Example Use the following command to list the denylists in the DNS service database: With the denylist ID obtained in the previous step, use the openstack zone blacklist show <denylist_ID> command to display properties for a particular denylist. Example Additional resources zone blacklist list in the Command Line Interface Reference zone blacklist show in the Command Line Interface Reference 5.10. Modifying DNS service denylists The Red Hat OpenStack Platform DNS service (designate) enables you to modify denylists. For example, you might want to change the denylist to allow users to create a zone with a particular domain name that in the past was restricted. You modify denylists with the OpenStack Client openstack zone blacklist set command. Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example Obtain the ID for the denylist that you want to modify by running the following command: You can modify a denylist in various ways by using the following command options: Note The earlier syntax diagram does not show the various formatting options for the openstack zone blacklist set command. For the list of all the command options, see the link in "Additional resources," later. In this example, the regular expression (regex) is changed to allow the web.example.com domain: Example Sample output Verification Run the openstack zone blacklist show <denylist_ID> command, and confirm that your modifications exist. Additional resources zone blacklist set in the Command Line Interface Reference Section 5.7, "About DNS service regular expressions in denylists" 5.11. Deleting DNS service denylists Denylists in the Red Hat OpenStack Platform DNS service (designate) enable you to prevent users from creating zones with names that match a particular regular expression. You remove denylists with the OpenStack Client openstack zone blacklist delete command. Prerequisites You must be a RHOSP user with the admin role. Procedure As a cloud administrator, source your credentials file. Example Obtain the ID for the denylist that you want to delete by running the following command: Using the ID from the previous step, enter the following command: There is no output when this command is successful. Verification Run the openstack zone blacklist show <denylist_ID> command, and verify that the denylist has been removed. Additional resources zone blacklist delete in the Command Line Interface Reference | [
"openstack zone create --email [email protected] test.net.",
"Invalid TLD",
"source ~/overcloudrc",
"openstack tld create --name org",
"+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | created_at | 2022-01-10T13:07:33.000000 | | description | None | | id | 9fd0a12d-511e-4024-bf76-6ec2e3e71edd | | name | org | | updated_at | None | +-------------+--------------------------------------+",
"openstack tld list --name zone1.cloud.example.com",
"source ~/overcloudrc",
"openstack tld list",
"openstack tld show org",
"source ~/overcloudrc",
"openstack tld set [--name NAME] [--description DESCRIPTION | --no-description] [TLD_ID | TLD_NAME]",
"openstack tld set org --name example.net",
"+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | created_at | 2022-01-10T13:07:33.000000 | | description | | | id | 9fd0a12d-511e-4024-bf76-6ec2e3e71edd | | name | example.net | | updated_at | 2022-01-10T22:35:20.000000 | +-------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack tld list",
"openstack tld delete <TLD_NAME_or_ID>",
"openstack zone create --email [email protected] example.com. Blacklisted zone name openstack zone create --email [email protected] foo.example.com. Blacklisted zone name",
"source ~/overcloudrc",
"openstack zone blacklist create --pattern \".*example.com.\"",
"+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | created_at | 2021-10-20T16:15:18.000000 | | description | None | | id | 7622e241-8c3d-4c03-a692-8747e3cf2658 | | pattern | .*example.com. | | updated_at | None | +-------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack zone blacklist list",
"openstack zone blacklist show 7622e241-8c3d-4c03-a692-8747e3cf2658",
"source ~/overcloudrc",
"openstack zone blacklist list",
"openstack zone blacklist set [--description DESCRIPTION | --no-description] denylist_ID",
"openstack zone blacklist set 81fbfe02-6bf9-4812-a40e-1522ab6862ca --pattern \".*web.example.com\"",
"+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | created_at | 2022-01-08T09:11:43.000000 | | description | None | | id | 81fbfe02-6bf9-4812-a40e-1522ab6862ca | | pattern | .*web.example.com | | updated_at | 2022-01-15T14:26:18.000000 | +-------------+--------------------------------------+",
"source ~/overcloudrc",
"openstack zone blacklist list",
"openstack zone blacklist delete <denylist_ID>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/manage-tlds_rhosp-dnsaas |
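The denylist commands above are driven by the denylist ID, which the procedures have you copy by hand from the list output. The following bash sketch chains the same create, show, set, and delete commands non-interactively. The generic OpenStack client output options -f value -c id used to capture the ID, and the local grep -E pre-check of the pattern, are illustrative assumptions rather than part of the documented procedure; the DNS service evaluates the pattern on the server side.

source ~/overcloudrc

PATTERN='.*example.com.'

# Optional local sanity check: see which candidate zone names the pattern
# would match before applying it (approximation only; designate applies
# the pattern server side).
for zone in example.com. web.example.com. example.org.; do
    echo "$zone" | grep -E -q "$PATTERN" && echo "$zone would be denied"
done

# Create the denylist and capture its ID for the follow-up commands.
DENYLIST_ID=$(openstack zone blacklist create --pattern "$PATTERN" -f value -c id)

# Confirm that the denylist exists and display its properties.
openstack zone blacklist list
openstack zone blacklist show "$DENYLIST_ID"

# Narrow the pattern so that only web.example.com zone names remain denied.
openstack zone blacklist set "$DENYLIST_ID" --pattern '.*web.example.com'

# Remove the denylist when it is no longer needed.
openstack zone blacklist delete "$DENYLIST_ID"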
Chapter 1. Tooling Guide | Chapter 1. Tooling Guide 1.1. About Tooling Guide This guide introduces VS Code extensions for Red Hat build of Apache Camel and how to install and use Camel CLI. Important The VS Code extensions for Apache Camel are listed as development support. For more information about scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/camel-tooling-guide |
Chapter 2. Resource Adapter Development | Chapter 2. Resource Adapter Development 2.1. Developing Custom Adapters For situations in which an existing JCA Adapter (or other connector mechanism) is not suitable, Red Hat JBoss Data Virtualization provides a framework for developing custom JCA Adapters. Red Hat JBoss Data Virtualization uses standard JCA Adapters. Base classes for all of the required supporting JCA SPI (Service Provider Interface) classes are provided by the Red Hat JBoss Data Virtualization API. The JCA CCI (Common Client Interface) support is not provided because Red Hat JBoss Data Virtualization uses the translator API as its common client interface. Note If you are not familiar with the JCA API, read the JCA 1.5 Specification at http://docs.oracle.com/cd/E15523_01/integration.1111/e10231/intro.htm . The process for developing a Red Hat JBoss Data Virtualization JCA Adapter is as follows (the required classes can be found in org.teiid.resource.spi ): Define a Managed Connection Factory by extending the BasicManagedConnectionFactory class Define a Connection Factory by extending the BasicConnectionFactory class Define a Connection by extending the BasicConnection class Specify configuration properties in an ra.xml file Note The examples contained in this book are simplified and do not include support for transactions or security which would add significant complexity. For sample resource adapter code, see the teiid/connectors directory of the Red Hat JBoss Data Virtualization 6.4 Source Code ZIP file. This ZIP file can be downloaded from the Red Hat Customer Portal at https://access.redhat.com . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-resource_adapter_development |
Preface | Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create ticket Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/pr01 |
Chapter 4. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and Modifying retention time for Prometheus metrics data in the Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 4.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to an OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions .
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 4.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create an ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web-console. Click ObjectBucketClaim --> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the managementState of the Image Registry Operator to Managed . Edit the spec.storage section of the Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console OR you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can also run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 4.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that consists of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See Modifying retention time for Prometheus metrics data in the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace.
In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below, the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 4.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 4.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 4.3. Persistent Volume Claims attached to prometheus-k8s-* pod 4.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . For more information, see OpenShift ClusterResourceQuota . With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd .
<desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the previous step, for example, quota1 . 4.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 4.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and is always available, and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 4.5.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below, the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add a toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 4.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workloads Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure for removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. | [
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'",
"oc describe noobaa",
"oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry",
"oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d",
"oc describe pod <image-registry-name>",
"oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>",
"oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>",
"apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]",
"oc get clusterresourcequota -A oc describe clusterresourcequota -A",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/configure-storage-for-openshift-container-platform-services_rhodf |
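For clusters managed from the command line, the image registry steps in section 4.1 can also be performed with oc instead of the web console. The sketch below is a CLI equivalent under stated assumptions: the storage class name ocs-storagecluster-cephfs is a placeholder for whichever class on your cluster uses the openshift-storage.cephfs.csi.ceph.com provisioner, and the merge patch only adds the pvc entry, so remove or replace any other storage backend already set in the Config instance first.

# Create the RWX claim for the registry (equivalent to the console PVC form).
cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs
EOF

# Wait until the claim reports a STATUS of Bound before continuing.
oc get pvc ocs4registry -n openshift-image-registry

# Point the cluster image registry at the new claim (equivalent to editing
# the cluster instance of the imageregistry Config custom resource).
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge \
  -p '{"spec": {"storage": {"pvc": {"claim": "ocs4registry"}}}}'

# Verify that a new image-registry pod rolls out and mounts the claim.
oc get pods -n openshift-image-registry
oc describe pod <image-registry-pod-name> -n openshift-image-registry | grep -A3 registry-storage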
9.3. Configuration Tools Red Hat Enterprise Linux provides a number of tools to assist administrators in configuring the system. This section outlines the available tools and provides examples of how they can be used to solve network-related performance problems in Red Hat Enterprise Linux 7. However, it is important to keep in mind that network performance problems are sometimes the result of hardware malfunction or faulty infrastructure. Red Hat highly recommends verifying that your hardware and infrastructure are working as expected before using these tools to tune the network stack. Further, some network performance problems are better resolved by altering the application than by reconfiguring your network subsystem. It is generally a good idea to configure your application to perform frequent POSIX calls, even if this means queuing data in the application space, as this allows data to be stored flexibly and swapped in or out of memory as required. 9.3.1. Tuned Profiles for Network Performance The Tuned service provides a number of different profiles to improve performance in a number of specific use cases. The following profiles can be useful for improving networking performance. latency-performance network-latency network-throughput For more information about these profiles, see Section A.5, "tuned-adm" . 9.3.2. Configuring the Hardware Buffer If a large number of packets are being dropped by the hardware buffer, there are a number of potential solutions. Slow the input traffic Filter incoming traffic, reduce the number of joined multicast groups, or reduce the amount of broadcast traffic to decrease the rate at which the queue fills. For details of how to filter incoming traffic, see the Red Hat Enterprise Linux 7 Security Guide . For details about multicast groups, see the Red Hat Enterprise Linux 7 Clustering documentation. For details about broadcast traffic, see the Red Hat Enterprise Linux 7 System Administrator's Guide , or documentation related to the device you want to configure. Resize the hardware buffer queue Reduce the number of packets being dropped by increasing the size of the queue so that it does not overflow as easily. You can modify the rx/tx parameters of the network device with the ethtool command: Change the drain rate of the queue Device weight refers to the number of packets a device can receive at one time (in a single scheduled processor access). You can increase the rate at which a queue is drained by increasing its device weight, which is controlled by the dev_weight parameter. This parameter can be temporarily altered by changing the contents of the /proc/sys/net/core/dev_weight file, or permanently altered with sysctl , which is provided by the procps-ng package. Altering the drain rate of a queue is usually the simplest way to mitigate poor network performance. However, increasing the number of packets that a device can receive at one time uses additional processor time, during which no other processes can be scheduled, so this can cause other performance problems. 9.3.3. Configuring Interrupt Queues If analysis reveals high latency, your system may benefit from poll-based rather than interrupt-based packet receipt. 9.3.3.1. Configuring Busy Polling Busy polling helps reduce latency in the network receive path by allowing socket layer code to poll the receive queue of a network device, and disabling network interrupts. This removes delays caused by the interrupt and the resultant context switch.
However, it also increases CPU utilization. Busy polling also prevents the CPU from sleeping, which can incur additional power consumption. Busy polling is disabled by default. To enable busy polling on specific sockets, do the following. Set sysctl.net.core.busy_poll to a value other than 0 . This parameter controls the number of microseconds to wait for packets on the device queue for socket poll and selects. Red Hat recommends a value of 50 . Add the SO_BUSY_POLL socket option to the socket. To enable busy polling globally, you must also set sysctl.net.core.busy_read to a value other than 0 . This parameter controls the number of microseconds to wait for packets on the device queue for socket reads. It also sets the default value of the SO_BUSY_POLL option. Red Hat recommends a value of 50 for a small number of sockets, and a value of 100 for large numbers of sockets. For extremely large numbers of sockets (more than several hundred), use epoll instead. Busy polling behavior is supported by the following drivers. These drivers are also supported on Red Hat Enterprise Linux 7. bnx2x be2net ixgbe mlx4 myri10ge As of Red Hat Enterprise Linux 7.1, you can also run the following command to check whether a specific device supports busy polling. If this returns busy-poll: on [fixed] , busy polling is available on the device. 9.3.4. Configuring Socket Receive Queues If analysis suggests that packets are being dropped because the drain rate of a socket queue is too slow, there are several ways to alleviate the performance issues that result. Decrease the speed of incoming traffic Decrease the rate at which the queue fills by filtering or dropping packets before they reach the queue, or by lowering the weight of the device. Increase the depth of the application's socket queue If a socket queue receives a limited amount of traffic in bursts, increasing the depth of the socket queue to match the size of the bursts of traffic may prevent packets from being dropped. 9.3.4.1. Decrease the Speed of Incoming Traffic Filter incoming traffic or lower the network interface card's device weight to slow incoming traffic. For details of how to filter incoming traffic, see the Red Hat Enterprise Linux 7 Security Guide . Device weight refers to the number of packets a device can receive at one time (in a single scheduled processor access). Device weight is controlled by the dev_weight parameter. This parameter can be temporarily altered by changing the contents of the /proc/sys/net/core/dev_weight file, or permanently altered with sysctl , which is provided by the procps-ng package. 9.3.4.2. Increasing Queue Depth Increasing the depth of an application socket queue is typically the easiest way to improve the drain rate of a socket queue, but it is unlikely to be a long-term solution. To increase the depth of a queue, increase the size of the socket receive buffer by making either of the following changes: Increase the value of /proc/sys/net/core/rmem_default This parameter controls the default size of the receive buffer used by sockets. This value must be smaller than or equal to the value of /proc/sys/net/core/rmem_max . Use setsockopt to configure a larger SO_RCVBUF value This parameter controls the maximum size in bytes of a socket's receive buffer. Use the getsockopt system call to determine the current value of the buffer. For further information, see the socket (7) manual page. 9.3.5.
Configuring Receive-Side Scaling (RSS) Receive-Side Scaling (RSS), also known as multi-queue receive, distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPUs. RSS can be used to relieve bottlenecks in receive interrupt processing caused by overloading a single CPU, and to reduce network latency. To determine whether your network interface card supports RSS, check whether multiple interrupt request queues are associated with the interface in /proc/interrupts . For example, if you are interested in the p1p1 interface: The preceding output shows that the NIC driver created 6 receive queues for the p1p1 interface ( p1p1-0 through p1p1-5 ). It also shows how many interrupts were processed by each queue, and which CPU serviced the interrupt. In this case, there are 6 queues because by default, this particular NIC driver creates one queue per CPU, and this system has 6 CPUs. This is a fairly common pattern among NIC drivers. Alternatively, you can check the output of ls -1 /sys/devices/*/*/ device_pci_address /msi_irqs after the network driver is loaded. For example, if you are interested in a device with a PCI address of 0000:01:00.0 , you can list the interrupt request queues of that device with the following command: RSS is enabled by default. The number of queues (or the CPUs that should process network activity) for RSS are configured in the appropriate network device driver. For the bnx2x driver, it is configured in num_queues . For the sfc driver, it is configured in the rss_cpus parameter. Regardless, it is typically configured in /sys/class/net/ device /queues/ rx-queue / , where device is the name of the network device (such as eth1 ) and rx-queue is the name of the appropriate receive queue. When configuring RSS, Red Hat recommends limiting the number of queues to one per physical CPU core. Hyper-threads are often represented as separate cores in analysis tools, but configuring queues for all cores including logical cores such as hyper-threads has not proven beneficial to network performance. When enabled, RSS distributes network processing equally between available CPUs based on the amount of processing each CPU has queued. However, you can use the ethtool --show-rxfh-indir and --set-rxfh-indir parameters to modify how network activity is distributed, and weight certain types of network activity as more important than others. The irqbalance daemon can be used in conjunction with RSS to reduce the likelihood of cross-node memory transfers and cache line bouncing. This lowers the latency of processing network packets. 9.3.6. Configuring Receive Packet Steering (RPS) Receive Packet Steering (RPS) is similar to RSS in that it is used to direct packets to specific CPUs for processing. However, RPS is implemented at the software level, and helps to prevent the hardware queue of a single network interface card from becoming a bottleneck in network traffic. RPS has several advantages over hardware-based RSS: RPS can be used with any network interface card. It is easy to add software filters to RPS to deal with new protocols. RPS does not increase the hardware interrupt rate of the network device. However, it does introduce inter-processor interrupts. 
RPS is configured per network device and receive queue, in the /sys/class/net/ device /queues/ rx-queue /rps_cpus file, where device is the name of the network device (such as eth0 ) and rx-queue is the name of the appropriate receive queue (such as rx-0 ). The default value of the rps_cpus file is 0 . This disables RPS, so the CPU that handles the network interrupt also processes the packet. To enable RPS, configure the appropriate rps_cpus file with the CPUs that should process packets from the specified network device and receive queue. The rps_cpus files use comma-delimited CPU bitmaps. Therefore, to allow specific CPUs to handle interrupts for the receive queue on an interface, set the value of their positions in the bitmap to 1. For example, to handle interrupts with CPUs 0, 1, 2, and 3, set the value of rps_cpus to f , which is the hexadecimal value for 15. In binary representation, 15 is 00001111 (1+2+4+8). For network devices with single transmit queues, best performance can be achieved by configuring RPS to use CPUs in the same memory domain. On non-NUMA systems, this means that all available CPUs can be used. If the network interrupt rate is extremely high, excluding the CPU that handles network interrupts may also improve performance. For network devices with multiple queues, there is typically no benefit to configuring both RPS and RSS, as RSS is configured to map a CPU to each receive queue by default. However, RPS may still be beneficial if there are fewer hardware queues than CPUs, and RPS is configured to use CPUs in the same memory domain. 9.3.7. Configuring Receive Flow Steering (RFS) Receive Flow Steering (RFS) extends RPS behavior to increase the CPU cache hit rate and thereby reduce network latency. Where RPS forwards packets based solely on queue length, RFS uses the RPS back end to calculate the most appropriate CPU, then forwards packets based on the location of the application consuming the packet. This increases CPU cache efficiency. RFS is disabled by default. To enable RFS, you must edit two files: /proc/sys/net/core/rps_sock_flow_entries Set the value of this file to the maximum expected number of concurrently active connections. We recommend a value of 32768 for moderate server loads. All values entered are rounded up to the nearest power of 2 in practice. /sys/class/net/ device /queues/ rx-queue /rps_flow_cnt Replace device with the name of the network device you wish to configure (for example, eth0 ), and rx-queue with the receive queue you wish to configure (for example, rx-0 ). Set the value of this file to the value of rps_sock_flow_entries divided by N , where N is the number of receive queues on a device. For example, if rps_sock_flow_entries is set to 32768 and there are 16 configured receive queues, rps_flow_cnt should be set to 2048 . For single-queue devices, the value of rps_flow_cnt is the same as the value of rps_sock_flow_entries . Data received from a single sender is not sent to more than one CPU. If the amount of data received from a single sender is greater than a single CPU can handle, configure a larger frame size to reduce the number of interrupts and therefore the amount of processing work for the CPU. Alternatively, consider NIC offload options or faster CPUs. Consider using numactl or taskset in conjunction with RFS to pin applications to specific cores, sockets, or NUMA nodes. This can help prevent packets from being processed out of order. 9.3.8. Configuring Accelerated RFS Accelerated RFS boosts the speed of RFS by adding hardware assistance.
Like RFS, packets are forwarded based on the location of the application consuming the packet. Unlike traditional RFS, however, packets are sent directly to a CPU that is local to the thread consuming the data: either the CPU that is executing the application, or a CPU local to that CPU in the cache hierarchy. Accelerated RFS is only available if the following conditions are met: Accelerated RFS must be supported by the network interface card. Accelerated RFS is supported by cards that export the ndo_rx_flow_steer() netdevice function. ntuple filtering must be enabled. Once these conditions are met, CPU to queue mapping is deduced automatically based on traditional RFS configuration. That is, CPU to queue mapping is deduced based on the IRQ affinities configured by the driver for each receive queue. Refer to Section 9.3.7, "Configuring Receive Flow Steering (RFS)" for details on configuring traditional RFS. Red Hat recommends using accelerated RFS wherever using RFS is appropriate and the network interface card supports hardware acceleration. | [
"ethtool --set-ring devname value",
"ethtool -k device | grep \"busy-poll\"",
"egrep 'CPU|p1p1' /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 89: 40187 0 0 0 0 0 IR-PCI-MSI-edge p1p1-0 90: 0 790 0 0 0 0 IR-PCI-MSI-edge p1p1-1 91: 0 0 959 0 0 0 IR-PCI-MSI-edge p1p1-2 92: 0 0 0 3310 0 0 IR-PCI-MSI-edge p1p1-3 93: 0 0 0 0 622 0 IR-PCI-MSI-edge p1p1-4 94: 0 0 0 0 0 2475 IR-PCI-MSI-edge p1p1-5",
"ls -1 /sys/devices/*/*/0000:01:00.0/msi_irqs 101 102 103 104 105 106 107 108 109"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Networking-Configuration_tools |
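The busy polling, RPS, and RFS settings described in sections 9.3.3 through 9.3.7 are all applied through sysctl parameters and per-queue sysfs files. The following bash sketch, to be run as root, applies them to a single interface using the values recommended above; the interface name and CPU mask are placeholders, and changes made this way do not persist across reboots, so use sysctl configuration files, a udev rule, or a startup script for permanent settings.

DEV=eth0            # placeholder: the interface to tune
CPU_MASK=f          # hexadecimal CPU bitmap: 00001111 = CPUs 0-3
FLOW_ENTRIES=32768  # recommended starting point for moderate server loads

# Enable busy polling globally (section 9.3.3.1).
sysctl -w net.core.busy_poll=50
sysctl -w net.core.busy_read=50

# Enable RPS on every receive queue of the device (section 9.3.6).
for q in /sys/class/net/"$DEV"/queues/rx-*; do
    echo "$CPU_MASK" > "$q/rps_cpus"
done

# Enable RFS (section 9.3.7): size the global flow table, then set each
# queue's flow count to rps_sock_flow_entries divided by the queue count.
echo "$FLOW_ENTRIES" > /proc/sys/net/core/rps_sock_flow_entries
NQUEUES=$(ls -d /sys/class/net/"$DEV"/queues/rx-* | wc -l)
for q in /sys/class/net/"$DEV"/queues/rx-*; do
    echo $(( FLOW_ENTRIES / NQUEUES )) > "$q/rps_flow_cnt"
done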
Chapter 4. Build [config.openshift.io/v1] | Chapter 4. Build [config.openshift.io/v1] Description Build configures the behavior of OpenShift builds for the entire cluster. This includes default settings that can be overridden in BuildConfig objects, and overrides which are applied to all builds. The canonical name is "cluster" Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec holds user-settable values for the build controller configuration 4.1.1. .spec Description Spec holds user-settable values for the build controller configuration Type object Property Type Description additionalTrustedCA object AdditionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted for image pushes and pulls during builds. The namespace for this config map is openshift-config. DEPRECATED: Additional CAs for image pull and push should be set on image.config.openshift.io/cluster instead. buildDefaults object BuildDefaults controls the default information for Builds buildOverrides object BuildOverrides controls override settings for builds 4.1.2. .spec.additionalTrustedCA Description AdditionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted for image pushes and pulls during builds. The namespace for this config map is openshift-config. DEPRECATED: Additional CAs for image pull and push should be set on image.config.openshift.io/cluster instead. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.3. .spec.buildDefaults Description BuildDefaults controls the default information for Builds Type object Property Type Description defaultProxy object DefaultProxy contains the default proxy settings for all build operations, including image pull/push and source download. Values can be overrode by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the build config's strategy. env array Env is a set of default environment variables that will be applied to the build if the specified variables do not exist on the build env[] object EnvVar represents an environment variable present in a Container. gitProxy object GitProxy contains the proxy settings for git operations only. If set, this will override any Proxy settings for all git commands, such as git clone. Values that are not set here will be inherited from DefaultProxy. imageLabels array ImageLabels is a list of docker labels that are applied to the resulting image. User can override a default label by providing a label with the same name in their Build/BuildConfig. 
imageLabels[] object resources object Resources defines resource requirements to execute the build. 4.1.4. .spec.buildDefaults.defaultProxy Description DefaultProxy contains the default proxy settings for all build operations, including image pull/push and source download. Values can be overrode by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the build config's strategy. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 4.1.5. .spec.buildDefaults.defaultProxy.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: \| -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.6. .spec.buildDefaults.env Description Env is a set of default environment variables that will be applied to the build if the specified variables do not exist on the build Type array 4.1.7. .spec.buildDefaults.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 4.1.8. .spec.buildDefaults.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 4.1.9. .spec.buildDefaults.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 4.1.10. .spec.buildDefaults.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 4.1.11. .spec.buildDefaults.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 4.1.12. .spec.buildDefaults.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 4.1.13. .spec.buildDefaults.gitProxy Description GitProxy contains the proxy settings for git operations only. If set, this will override any Proxy settings for all git commands, such as git clone. Values that are not set here will be inherited from DefaultProxy. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 4.1.14. .spec.buildDefaults.gitProxy.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: \| -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.15. .spec.buildDefaults.imageLabels Description ImageLabels is a list of docker labels that are applied to the resulting image. User can override a default label by providing a label with the same name in their Build/BuildConfig. Type array 4.1.16. .spec.buildDefaults.imageLabels[] Description Type object Property Type Description name string Name defines the name of the label. It must have non-zero length. value string Value defines the literal value of the label. 4.1.17. 
.spec.buildDefaults.resources Description Resources defines resource requirements to execute the build. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 4.1.18. .spec.buildDefaults.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 4.1.19. .spec.buildDefaults.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 4.1.20. .spec.buildOverrides Description BuildOverrides controls override settings for builds Type object Property Type Description forcePull boolean ForcePull overrides, if set, the equivalent value in the builds, i.e. false disables force pull for all builds, true enables force pull for all builds, independently of what each build specifies itself imageLabels array ImageLabels is a list of docker labels that are applied to the resulting image. If user provided a label in their Build/BuildConfig with the same name as one in this list, the user's label will be overwritten. imageLabels[] object nodeSelector object (string) NodeSelector is a selector which must be true for the build pod to fit on a node tolerations array Tolerations is a list of Tolerations that will override any existing tolerations set on a build pod. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 4.1.21. .spec.buildOverrides.imageLabels Description ImageLabels is a list of docker labels that are applied to the resulting image. If user provided a label in their Build/BuildConfig with the same name as one in this list, the user's label will be overwritten. Type array 4.1.22. .spec.buildOverrides.imageLabels[] Description Type object Property Type Description name string Name defines the name of the label. It must have non-zero length. value string Value defines the literal value of the label. 4.1.23. .spec.buildOverrides.tolerations Description Tolerations is a list of Tolerations that will override any existing tolerations set on a build pod. Type array 4.1.24. 
.spec.buildOverrides.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 4.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/builds DELETE : delete collection of Build GET : list objects of kind Build POST : create a Build /apis/config.openshift.io/v1/builds/{name} DELETE : delete a Build GET : read the specified Build PATCH : partially update the specified Build PUT : replace the specified Build /apis/config.openshift.io/v1/builds/{name}/status GET : read status of the specified Build PATCH : partially update status of the specified Build PUT : replace status of the specified Build 4.2.1. /apis/config.openshift.io/v1/builds HTTP method DELETE Description delete collection of Build Table 4.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Build Table 4.2. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty HTTP method POST Description create a Build Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.4. Body parameters Parameter Type Description body Build schema Table 4.5. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty 4.2.2. /apis/config.openshift.io/v1/builds/{name} Table 4.6. Global path parameters Parameter Type Description name string name of the Build HTTP method DELETE Description delete a Build Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Build Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Build Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Build Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body Build schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 4.2.3. /apis/config.openshift.io/v1/builds/{name}/status Table 4.15. Global path parameters Parameter Type Description name string name of the Build HTTP method GET Description read status of the specified Build Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Build Table 4.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Build Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Build schema Table 4.21. HTTP responses HTTP code Response body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/config_apis/build-config-openshift-io-v1
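To put the buildDefaults and buildOverrides fields documented above into context, the following is a minimal, illustrative Build resource named cluster. Every field name comes from the reference tables above, but the proxy URLs, label names, resource amounts, and toleration values are placeholder assumptions rather than recommended settings.

apiVersion: config.openshift.io/v1
kind: Build
metadata:
  name: cluster
spec:
  buildDefaults:
    gitProxy:
      httpProxy: http://proxy.example.com:3128
      httpsProxy: https://proxy.example.com:3128
      noProxy: .internal.example.com
    imageLabels:
      - name: vendor
        value: example
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: "1"
        memory: 1Gi
  buildOverrides:
    imageLabels:
      - name: distribution-scope
        value: private
    tolerations:
      - key: build-node
        operator: Equal
        value: "true"
        effect: NoSchedule

Because this resource is cluster scoped, any values you apply affect all builds, so check them against the property descriptions above before use.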
Chapter 200. Kubernetes Job Component | Chapter 200. Kubernetes Job Component Available as of Camel version 2.23 The Kubernetes Job component is one of Kubernetes Components which provides a producer to execute kubernetes job operations. 200.1. Component Options The Kubernetes Job component has no options. 200.2. Endpoint Options The Kubernetes Job endpoint is configured using URI syntax: with the following path and query parameters: 200.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 200.2.2. Query Parameters (28 parameters): Name Description Default Type apiVersion (common) The Kubernetes API Version to use String dnsDomain (common) The dns domain, used for ServiceCall EIP String kubernetesClient (common) Default KubernetesClient to use if provided KubernetesClient portName (common) The port name, used for ServiceCall EIP String portProtocol (common) The port protocol, used for ServiceCall EIP tcp String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean labelKey (consumer) The Consumer Label key when watching at some resources String labelValue (consumer) The Consumer Label value when watching at some resources String namespace (consumer) The namespace String poolSize (consumer) The Consumer pool size 1 int resourceName (consumer) The Consumer Resource Name we would like to watch String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern operation (producer) Producer operation to do on Kubernetes String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 200.3. Supported Producer Operation The Kubernetes Job component supports following producer operations: listJob listJobByLabels getJob createJob replaceJob deleteJob 200.4. Kubernetes Job Producer Examples listJob: this operation list the jobs on a kubernetes cluster from("direct:list"). 
toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob"). to("mock:result"); This operation return a List of Job from your cluster listJobByLabels: this operation list the jobs by labels on a kubernetes cluster from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); } }); toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels"). to("mock:result"); This operation return a List of Jobs from your cluster, using a label selector (with key1 and key2, with value value1 and value2) createJob: This operation create a job on a Kubernetes Cluster We have a wonderful example of this operation thanks to Emmerson Miranda from this Java test import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.inject.Inject; import org.apache.camel.Endpoint; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.cdi.Uri; import org.apache.camel.component.kubernetes.KubernetesConstants; import org.apache.camel.component.kubernetes.KubernetesOperations; import io.fabric8.kubernetes.api.model.Container; import io.fabric8.kubernetes.api.model.ObjectMeta; import io.fabric8.kubernetes.api.model.PodSpec; import io.fabric8.kubernetes.api.model.PodTemplateSpec; import io.fabric8.kubernetes.api.model.batch.JobSpec; public class KubernetesCreateJob extends RouteBuilder { @Inject @Uri("timer:foo?delay=1000&repeatCount=1") private Endpoint inputEndpoint; @Inject @Uri("log:output") private Endpoint resultEndpoint; @Override public void configure() { // you can configure the route rule with Java DSL here from(inputEndpoint) .routeId("kubernetes-jobcreate-client") .process(exchange -> { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, "camel-job"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); Map<String, String> joblabels = new HashMap<String, String>(); joblabels.put("jobLabelKey1", "value1"); joblabels.put("jobLabelKey2", "value2"); joblabels.put("app", "jobFromCamelApp"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); }) .toF("kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=" + KubernetesOperations.CREATE_JOB_OPERATION) .log("Job created:") .process(exchange -> { System.out.println(exchange.getIn().getBody()); }) .to(resultEndpoint); } private JobSpec generateJobSpec() { JobSpec js = new JobSpec(); PodTemplateSpec pts = new PodTemplateSpec(); PodSpec ps = new PodSpec(); ps.setRestartPolicy("Never"); ps.setContainers(generateContainers()); pts.setSpec(ps); ObjectMeta metadata = new ObjectMeta(); Map<String, String> annotations = new HashMap<String, String>(); annotations.put("jobMetadataAnnotation1", "random value"); metadata.setAnnotations(annotations); Map<String, String> podlabels = new HashMap<String, String>(); podlabels.put("podLabelKey1", "value1"); podlabels.put("podLabelKey2", "value2"); podlabels.put("app", "podFromCamelApp"); metadata.setLabels(podlabels); pts.setMetadata(metadata); js.setTemplate(pts); return js; } private List<Container> generateContainers() { Container container = new Container(); container.setName("pi"); container.setImage("perl"); List<String> command = new ArrayList<String>(); command.add("echo"); command.add("Job created from Apache Camel code at " + (new Date())); container.setCommand(command); List<Container> containers = new ArrayList<Container>(); containers.add(container); return containers; } } 200.5. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean | [
"kubernetes-job:masterUrl",
"from(\"direct:list\"). toF(\"kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJob\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, labels); } }); toF(\"kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=listJobByLabels\"). to(\"mock:result\");",
"import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.inject.Inject; import org.apache.camel.Endpoint; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.cdi.Uri; import org.apache.camel.component.kubernetes.KubernetesConstants; import org.apache.camel.component.kubernetes.KubernetesOperations; import io.fabric8.kubernetes.api.model.Container; import io.fabric8.kubernetes.api.model.ObjectMeta; import io.fabric8.kubernetes.api.model.PodSpec; import io.fabric8.kubernetes.api.model.PodTemplateSpec; import io.fabric8.kubernetes.api.model.batch.JobSpec; public class KubernetesCreateJob extends RouteBuilder { @Inject @Uri(\"timer:foo?delay=1000&repeatCount=1\") private Endpoint inputEndpoint; @Inject @Uri(\"log:output\") private Endpoint resultEndpoint; @Override public void configure() { // you can configure the route rule with Java DSL here from(inputEndpoint) .routeId(\"kubernetes-jobcreate-client\") .process(exchange -> { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, \"camel-job\"); //DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); Map<String, String> joblabels = new HashMap<String, String>(); joblabels.put(\"jobLabelKey1\", \"value1\"); joblabels.put(\"jobLabelKey2\", \"value2\"); joblabels.put(\"app\", \"jobFromCamelApp\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_LABELS, joblabels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_SPEC, generateJobSpec()); }) .toF(\"kubernetes-job:///{{kubernetes-master-url}}?oauthToken={{kubernetes-oauth-token:}}&operation=\" + KubernetesOperations.CREATE_JOB_OPERATION) .log(\"Job created:\") .process(exchange -> { System.out.println(exchange.getIn().getBody()); }) .to(resultEndpoint); } private JobSpec generateJobSpec() { JobSpec js = new JobSpec(); PodTemplateSpec pts = new PodTemplateSpec(); PodSpec ps = new PodSpec(); ps.setRestartPolicy(\"Never\"); ps.setContainers(generateContainers()); pts.setSpec(ps); ObjectMeta metadata = new ObjectMeta(); Map<String, String> annotations = new HashMap<String, String>(); annotations.put(\"jobMetadataAnnotation1\", \"random value\"); metadata.setAnnotations(annotations); Map<String, String> podlabels = new HashMap<String, String>(); podlabels.put(\"podLabelKey1\", \"value1\"); podlabels.put(\"podLabelKey2\", \"value2\"); podlabels.put(\"app\", \"podFromCamelApp\"); metadata.setLabels(podlabels); pts.setMetadata(metadata); js.setTemplate(pts); return js; } private List<Container> generateContainers() { Container container = new Container(); container.setName(\"pi\"); container.setImage(\"perl\"); List<String> command = new ArrayList<String>(); command.add(\"echo\"); command.add(\"Job created from Apache Camel code at \" + (new Date())); container.setCommand(command); List<Container> containers = new ArrayList<Container>(); containers.add(container); return containers; } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-job-component |
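The component reference above lists deleteJob among the supported producer operations but does not include an example. The following sketch, modeled on the listJob and createJob routes shown in that entry and assuming the same imports, deletes a job by name; the job name camel-job and the default namespace are illustrative values.

from("direct:delete").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Name and namespace of the job to delete (illustrative values)
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_JOB_NAME, "camel-job");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
    }
}).
toF("kubernetes-job:///?kubernetesClient=#kubernetesClient&operation=deleteJob").
to("mock:result");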
16.3. Tiering Limitations (Deprecated) | 16.3. Tiering Limitations (Deprecated) Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. The following limitations apply to the use of the Tiering feature: Native client support for tiering is limited to Red Hat Enterprise Linux versions 6.7, 6.8, and 7.x clients. Tiered volumes cannot be mounted by Red Hat Enterprise Linux 5.x clients. Tiering works only with cache-friendly workloads. Attaching a tier volume to a cache-unfriendly workload leads to slow performance. In a cache-friendly workload, most of the reads and writes access a subset of the total amount of data, this subset fits on the hot tier, and this subset changes only infrequently. The Tiering feature is supported only on Red Hat Enterprise Linux 7 based Red Hat Gluster Storage. The Tiering feature is not supported on Red Hat Enterprise Linux 6 based Red Hat Gluster Storage. Only FUSE and gluster-nfs access is supported. Server Message Block (SMB) and nfs-ganesha access to tiered volumes is not supported. Creating a snapshot of a tiered volume is supported. Snapshot clones are not supported with tiered volumes. When you run tier detach commit or tier detach force , ongoing I/O operations may fail with a Transport endpoint is not connected error. Files with hardlinks and softlinks are not migrated. Files on which POSIX locks have been taken are not migrated until all locks are released. Add brick, remove brick, and rebalance operations are not supported on the tiered volume. For information on expanding a tiered volume, see Section 11.7.1, "Expanding a Tiered Volume", and for information on shrinking a tiered volume, see Section 11.8.2, "Shrinking a Tiered Volume". | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_data_tiering-limitations
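Because the limitations above refer to the tier detach commit and tier detach force operations, the usual detach sequence is sketched here for context. VOLNAME is a placeholder for your volume name, and the exact syntax should be confirmed against the "Shrinking a Tiered Volume" section cited above before running it on a production system.

# gluster volume tier VOLNAME detach start
# gluster volume tier VOLNAME detach status
# gluster volume tier VOLNAME detach commit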
Chapter 1. Introduction | Chapter 1. Introduction Past versions of Red Hat OpenStack Platform used services managed with Systemd. However, more recent versions of OpenStack Platform use containers to run services. Some administrators might not have a good understanding of how containerized OpenStack Platform services operate, and so this guide aims to help you understand OpenStack Platform container images and containerized services. This includes: How to obtain and modify container images How to manage containerized services in the overcloud Understanding how containers differ from Systemd services The main goal is to help you gain enough knowledge of containerized OpenStack Platform services to transition from a Systemd-based environment to a container-based environment. 1.1. Containerized Services and Kolla Each of the main Red Hat OpenStack Platform services runs in a container. This provides a method of keeping each service within its own isolated namespace, separated from the host. This means: The deployment of services is performed by pulling container images from the Red Hat Customer Portal and running them. The management functions, like starting and stopping services, operate through the podman command. Upgrading containers requires pulling new container images and replacing the existing containers with newer versions. Red Hat OpenStack Platform uses a set of containers built and managed with the kolla toolset. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/transitioning_to_containerized_services/introduction
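As a concrete illustration of the podman-based management mentioned above, the commands below list the service containers running on an overcloud node and restart one of them. The container name keystone is an assumption and varies by deployment, and on director-deployed systems the supported restart path may go through the systemd units that wrap these containers, so treat this as a sketch rather than the canonical procedure.

$ sudo podman ps --format "{{.Names}} {{.Status}}"   # list running service containers
$ sudo podman restart keystone                       # restart one container (name is illustrative)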
Chapter 1. Data Grid Operator 8.5 | Chapter 1. Data Grid Operator 8.5 Get version details for Data Grid Operator 8.5 and information about issues. 1.1. Data Grid Operator 8.5.4 What is new in 8.5.4. Setting CPU and memory limits in Batch CR With this update, you can limit the number of CPU requests and memory allocation in a Batch Custom Resource (CR). For example: apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: exampleBatch spec: cluster: infinispan configMap: mybatch-config-map container: cpu: "2000m:1000m" 1 memory: "2Gi:1Gi" 2 1 The CPU resources, where 2000m is the maximum limit (2 CPU cores) and 1000m is the guaranteed request (1 CPU core). 2 The memory resources, where 2Gi is the maximum limit and 1Gi is the guaranteed request. Customizing log display in log traces You can now customize the log display for Data Grid log traces by defining the spec.logging.pattern field in your Infinispan CR. If you do not define a custom pattern, the default format is the following: For more information, see Adjusting log pattern . Support for auto scaling with HorizontalPodAutoscaler StatefulSets or Deployments can now be automatically scaled up or down based on specified metrics by defining a HorizontalPodAutoscaler resource in the same namespace as the Infinispan CR. For more information, see Auto Scaling . 1.2. Data Grid Operator 8.5.3 What's new in 8.5.3. Automatic reloading of SSL/TLS certificates Starting with Data Grid 8.5.1, Data Grid monitors keystore files for changes and automatically reloads them, without requiring a server or client restart, when certificates are renewed. Therefore, with Data Grid Operator 8.5.3, StatefulSet rolling update is not triggered on key or truststore update in a server when managing Data Grid 8.5.1 Operands because it is not required. 1.3. Data Grid Operator 8.5.0 What's new in 8.5.0. Ability to configure InitContainer resource You can now configure the InitContainer resource. Previously, if a LimitRange was in effect for the deployment namespace, then the InitContainer would be restricted to these resource values causing issues such as OutOfMemoryError. You can configure InitContainer resource configuration in the Data Grid CR as follows: spec: dependencies: initContainer: cpu: "2000m:1000m" memory: "2Gi:1Gi" Ability to define Batch resource CPU and memory request/limits You can now define CPU and memory request/limits for Batch Job created by the Operator. You can define the resource request/limits in the Batch CR as follows: apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan configMap: mybatch-config-map container: cpu: "2000m:1000m" memory: "2Gi:1Gi" TLSv1.3 encryption for cross-site encryption The default encryption protocol for cross-site is now TLSv1.3 instead of TLSv1.2. Ability to define TopologyPodConstraints and Tolerations in StatefulSet You can now configure more advanced high availability configurations by defining TopologyPodConstraints and Tolerations in spec.statefulSet . Example Cache service type removed RHDG 8.5 removes the Cache service type cache. Instead, use the DataGrid service type to automate complex operations such as cluster upgrades and data migration. Cloud events removed RHDG 8.5 removes cloud events integration. 1.4. Data Grid Operator 8.5.x release information The following table provides detailed version information for Data Grid Operator. 
Note Data Grid Operator versions do not always directly correspond to Data Grid versions because the release schedule is different. Data Grid Operator version Data Grid version Operand versions Features 8.5.4 8.5.2 8.5.2-1 8.5.1-1 8.5.0-3 8.5.0-2 8.5.0-1 8.4.8-1 8.4.7-1 8.4.6-2 8.4.6-1 8.4.5-2 8.4.5-1 8.4.4-1 8.4.3-2 8.4.3-1 8.4.2-1 8.4.1-3 8.4.1-2 8.4.1-1 8.4.0-2 8.4.0-1 Includes several bug fixes. 8.5.3 8.5.1 8.5.1-1 8.5.0-3 8.5.0-2 8.5.0-1 8.4.8-1 8.4.7-1 8.4.6-2 8.4.6-1 8.4.5-2 8.4.5-1 8.4.4-1 8.4.3-2 8.4.3-1 8.4.2-1 8.4.1-3 8.4.1-2 8.4.1-1 8.4.0-2 8.4.0-1 Includes several bug fixes. 8.5.2 8.5.0 8.5.0-3 8.5.0-2 8.5.0-1 8.4.8-1 8.4.7-1 8.4.6-2 8.4.6-1 8.4.5-2 8.4.5-1 8.4.4-1 8.4.3-2 8.4.3-1 8.4.2-1 8.4.1-3 8.4.1-2 8.4.1-1 8.4.0-2 8.4.0-1 Includes several bug fixes. 8.5.1 8.5.0 8.5.0-2 8.5.0-1 8.4.8-1 8.4.7-1 8.4.6-2 8.4.6-1 8.4.5-2 8.4.5-1 8.4.4-1 8.4.3-2 8.4.3-1 8.4.2-1 8.4.1-3 8.4.1-2 8.4.1-1 8.4.0-2 8.4.0-1 Includes several bug fixes. 8.5.0 8.5.0 8.5.0-1 8.4.8-1 8.4.7-1 8.4.6-2 8.4.6-1 8.4.5-2 8.4.5-1 8.4.4-1 8.4.3-2 8.4.3-1 8.4.2-1 8.4.1-3 8.4.1-2 8.4.1-1 8.4.0-2 8.4.0-1 Includes several bug fixes. | [
"apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: exampleBatch spec: cluster: infinispan configMap: mybatch-config-map container: cpu: \"2000m:1000m\" 1 memory: \"2Gi:1Gi\" 2",
"%d{HH:mm:ss,SSS} %-5p (%t) [%c] %m%throwable%n",
"spec: dependencies: initContainer: cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\"",
"apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan configMap: mybatch-config-map container: cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\"",
"kind: Infinispan spec: scheduling: affinity: tolerations: topologySpreadConstraints:"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_8.5_release_notes/rhdg-operator-releases |
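To make the auto scaling note above more concrete, the following is an illustrative HorizontalPodAutoscaler that scales a Data Grid StatefulSet on CPU utilization. The target name infinispan (assumed to match the Infinispan CR name), the replica bounds, and the utilization threshold are all assumptions; consult the Auto Scaling documentation referenced in the release note for the supported configuration.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: infinispan-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: infinispan
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80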
5.12. The multipathd Interactive Console and the multipathd Command | 5.12. The multipathd Interactive Console and the multipathd Command The multipathd -k command is an interactive interface to the multipathd daemon. Executing this command brings up an interactive multipath console, from which you can issue a number of commands. After executing this command, you can enter help to get a list of available commands, you can enter an interactive command, or you can enter CTRL-D to quit. Note that you can issue any of the multipathd commands without entering interactive mode by using the following format. Some multipathd commands include a format option followed by a wildcard. You can display a list of available wildcards with the following command. The multipathd interactive console can be used to troubleshoot problems you may be having with your system. For example, the following command sequence displays the multipath configuration, including the defaults, before exiting the console. The following command sequence ensures that multipath has picked up any changes to the multipath.conf file. Use the following command sequence to ensure that the path checker is working properly. As of Red Hat Enterprise Linux release 6.8, the multipathd command supports new format commands that show the status of multipath devices and paths in "raw" format versions. In raw format, no headers are printed and the fields are not padded to align the columns with the headers. Instead, the fields print exactly as specified in the format string. This output can then be more easily used for scripting. You can display the wildcards used in the format string with the multipathd show wildcards command. The following multipathd commands show the multipath devices that multipathd is monitoring, using a format string with multipath wildcards, in regular and raw format. The following multipathd commands show the paths that multipathd is monitoring, using a format string with multipath wildcards, in regular and raw format. The following commands show the difference between the non-raw and raw formats for the multipathd show maps command. Note that in raw format there are no headers and only a single space between the columns. | [
"multipathd command argument",
"multipathd show wildcards",
"multipathd -k > > show config > > CTRL-D",
"multipathd -k > > reconfigure > > CTRL-D",
"multipathd -k > > show paths > > CTRL-D",
"list|show maps|multipaths format USDformat list|show maps|multipaths raw format USDformat",
"list|show paths format USDformat list|show paths raw format USDformat",
"multipathd show maps format \"%n %w %d %s\" name uuid sysfs vend/prod/rev mpathc 360a98000324669436c2b45666c567942 dm-0 NETAPP,LUN multipathd show maps raw format \"%n %w %d %s\" mpathc 360a98000324669436c2b45666c567942 dm-0 NETAPP,LUN"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/multipath_config_confirm |
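As a small scripting illustration of the raw output described above, the exact raw command shown in that example can be piped to standard text tools; here awk prints only the sysfs device (the third field of the format string), which is one possible use rather than a required step.

multipathd show maps raw format "%n %w %d %s" | awk '{print $3}'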
Chapter 6. Troubleshooting common problems with distributed workloads for users | Chapter 6. Troubleshooting common problems with distributed workloads for users If you are experiencing errors in Red Hat OpenShift AI relating to distributed workloads, read this section to understand what could be causing the problem, and how to resolve the problem. If the problem is not documented here or in the release notes, contact Red Hat Support. 6.1. My Ray cluster is in a suspended state Problem The resource quota specified in the cluster queue configuration might be insufficient, or the resource flavor might not yet be created. Diagnosis The Ray cluster head pod or worker pods remain in a suspended state. Resolution In the OpenShift console, select your project from the Project list. Check the workload resource: Click Search , and from the Resources list, select Workload . Select the workload resource that is created with the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field, which provides the reason for the suspended state, as shown in the following example: status: conditions: - lastTransitionTime: '2024-05-29T13:05:09Z' message: 'couldn''t assign flavors to pod set small-group-jobtest12: insufficient quota for nvidia.com/gpu in flavor default-flavor in ClusterQueue' Check the Ray cluster resource: Click Search , and from the Resources list, select RayCluster . Select the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field. Check the cluster queue resource: Click Search , and from the Resources list, select ClusterQueue . Check your cluster queue configuration to ensure that the resources that you requested are within the limits defined for the project. Either reduce your requested resources, or contact your administrator to request more resources. 6.2. My Ray cluster is in a failed state Problem You might have insufficient resources. Diagnosis The Ray cluster head pod or worker pods are not running. When a Ray cluster is created, it initially enters a failed state. This failed state usually resolves after the reconciliation process completes and the Ray cluster pods are running. Resolution If the failed state persists, complete the following steps: In the OpenShift console, select your project from the Project list. Click Search , and from the Resources list, select Pod . Click your pod name to open the pod details page. Click the Events tab, and review the pod events to identify the cause of the problem. If you cannot resolve the problem, contact your administrator to request assistance. 6.3. 
I see a failed to call webhook error message for the CodeFlare Operator Problem After you run the cluster.up() command, the following error is shown: ApiException: (500) Reason: Internal Server Error HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"mraycluster.ray.openshift.ai\": failed to call webhook: Post \"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"codeflare-operator-webhook-service\"","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"mraycluster.ray.openshift.ai\": failed to call webhook: Post \"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"codeflare-operator-webhook-service\""}]},"code":500} Diagnosis The CodeFlare Operator pod might not be running. Resolution Contact your administrator to request assistance. 6.4. I see a failed to call webhook error message for Kueue Problem After you run the cluster.up() command, the following error is shown: ApiException: (500) Reason: Internal Server Error HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"mraycluster.kb.io\": failed to call webhook: Post \"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"kueue-webhook-service\"","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"mraycluster.kb.io\": failed to call webhook: Post \"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\": no endpoints available for service \"kueue-webhook-service\""}]},"code":500} Diagnosis The Kueue pod might not be running. Resolution Contact your administrator to request assistance. 6.5. My Ray cluster doesn't start Problem After you run the cluster.up() command, when you run either the cluster.details() command or the cluster.status() command, the Ray Cluster remains in the Starting status instead of changing to the Ready status. No pods are created. Diagnosis In the OpenShift console, select your project from the Project list. Check the workload resource: Click Search , and from the Resources list, select Workload . Select the workload resource that is created with the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field, which provides the reason for remaining in the Starting state. Check the Ray cluster resource: Click Search , and from the Resources list, select RayCluster . Select the Ray cluster resource, and click the YAML tab. Check the text in the status.conditions.message field. Resolution If you cannot resolve the problem, contact your administrator to request assistance. 6.6. I see a Default Local Queue ... not found error message Problem After you run the cluster.up() command, the following error is shown: Default Local Queue with kueue.x-k8s.io/default-queue: true annotation not found please create a default Local Queue or provide the local_queue name in Cluster Configuration. Diagnosis No default local queue is defined, and a local queue is not specified in the cluster configuration. Resolution In the OpenShift console, select your project from the Project list. 
Click Search , and from the Resources list, select LocalQueue . Resolve the problem in one of the following ways: If a local queue exists, add it to your cluster configuration as follows: local_queue=" <local_queue_name> " If no local queue exists, contact your administrator to request assistance. 6.7. I see a local_queue provided does not exist error message Problem After you run the cluster.up() command, the following error is shown: local_queue provided does not exist or is not in this namespace. Please provide the correct local_queue name in Cluster Configuration. Diagnosis An incorrect value is specified for the local queue in the cluster configuration, or an incorrect default local queue is defined. The specified local queue either does not exist, or exists in a different namespace. Resolution In the OpenShift console, select your project from the Project list. Click Search , and from the Resources list, select LocalQueue . Resolve the problem in one of the following ways: If a local queue exists, ensure that you spelled the local queue name correctly in your cluster configuration, and that the namespace value in the cluster configuration matches your project name. If you do not specify a namespace value in the cluster configuration, the Ray cluster is created in the current project. If no local queue exists, contact your administrator to request assistance. 6.8. I cannot create a Ray cluster or submit jobs Problem After you run the cluster.up() command, an error similar to the following error is shown: RuntimeError: Failed to get RayCluster CustomResourceDefinition: (403) Reason: Forbidden HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rayclusters.ray.io is forbidden: User \"system:serviceaccount:regularuser-project:regularuser-workbench\" cannot list resource \"rayclusters\" in API group \"ray.io\" in the namespace \"regularuser-project\"","reason":"Forbidden","details":{"group":"ray.io","kind":"rayclusters"},"code":403} Diagnosis The correct OpenShift login credentials are not specified in the TokenAuthentication section of your notebook code. Resolution Identify the correct OpenShift login credentials as follows: In the OpenShift console header, click your username and click Copy login command . In the new tab that opens, log in as the user whose credentials you want to use. Click Display Token . From the Log in with this token section, copy the token and server values. In your notebook code, specify the copied token and server values as follows: auth = TokenAuthentication( token = " <token> ", server = " <server> ", skip_tls=False ) auth.login() 6.9. My pod provisioned by Kueue is terminated before my image is pulled Problem Kueue waits for a period of time before marking a workload as ready, to enable all of the workload pods to become provisioned and running. By default, Kueue waits for 5 minutes. If the pod image is very large and is still being pulled after the 5-minute waiting period elapses, Kueue fails the workload and terminates the related pods. Diagnosis In the OpenShift console, select your project from the Project list. Click Search , and from the Resources list, select Pod . Click the Ray head pod name to open the pod details page. Click the Events tab, and review the pod events to check whether the image pull completed successfully. Resolution If the pod takes more than 5 minutes to pull the image, contact your administrator to request assistance. | [
"status: conditions: - lastTransitionTime: '2024-05-29T13:05:09Z' message: 'couldn''t assign flavors to pod set small-group-jobtest12: insufficient quota for nvidia.com/gpu in flavor default-flavor in ClusterQueue'",
"ApiException: (500) Reason: Internal Server Error HTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Internal error occurred: failed calling webhook \\\"mraycluster.ray.openshift.ai\\\": failed to call webhook: Post \\\"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"codeflare-operator-webhook-service\\\"\",\"reason\":\"InternalError\",\"details\":{\"causes\":[{\"message\":\"failed calling webhook \\\"mraycluster.ray.openshift.ai\\\": failed to call webhook: Post \\\"https://codeflare-operator-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"codeflare-operator-webhook-service\\\"\"}]},\"code\":500}",
"ApiException: (500) Reason: Internal Server Error HTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Internal error occurred: failed calling webhook \\\"mraycluster.kb.io\\\": failed to call webhook: Post \\\"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"kueue-webhook-service\\\"\",\"reason\":\"InternalError\",\"details\":{\"causes\":[{\"message\":\"failed calling webhook \\\"mraycluster.kb.io\\\": failed to call webhook: Post \\\"https://kueue-webhook-service.redhat-ods-applications.svc:443/mutate-ray-io-v1-raycluster?timeout=10s\\\": no endpoints available for service \\\"kueue-webhook-service\\\"\"}]},\"code\":500}",
"Default Local Queue with kueue.x-k8s.io/default-queue: true annotation not found please create a default Local Queue or provide the local_queue name in Cluster Configuration.",
"local_queue=\" <local_queue_name> \"",
"local_queue provided does not exist or is not in this namespace. Please provide the correct local_queue name in Cluster Configuration.",
"RuntimeError: Failed to get RayCluster CustomResourceDefinition: (403) Reason: Forbidden HTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"rayclusters.ray.io is forbidden: User \\\"system:serviceaccount:regularuser-project:regularuser-workbench\\\" cannot list resource \\\"rayclusters\\\" in API group \\\"ray.io\\\" in the namespace \\\"regularuser-project\\\"\",\"reason\":\"Forbidden\",\"details\":{\"group\":\"ray.io\",\"kind\":\"rayclusters\"},\"code\":403}",
"auth = TokenAuthentication( token = \" <token> \", server = \" <server> \", skip_tls=False ) auth.login()"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_distributed_workloads/troubleshooting-common-problems-with-distributed-workloads-for-users_distributed-workloads |
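To show where the local_queue setting mentioned in the resolutions above fits, here is a minimal, hypothetical cluster configuration using the CodeFlare SDK. The cluster name, namespace, queue name, and worker count are placeholders, and the exact parameter set of ClusterConfiguration may differ between SDK versions, so verify it against the SDK you have installed.

from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="raytest",                # placeholder cluster name
    namespace="my-project",        # must match your data science project
    local_queue="my-local-queue",  # LocalQueue in the same namespace
    num_workers=1,                 # illustrative sizing
))
cluster.up()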
14.5.6. Retrieving Network Statistics | 14.5.6. Retrieving Network Statistics The domifstat [domain] [interface-device] command displays the network interface statistics for the specified device running on a given domain. | [
"domifstat rhel6 eth0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-retrieving_network_statistics |
Chapter 15. Viewing log entries | Chapter 15. Viewing log entries You can view log entries for Red Hat Fuse in the Logs tab. Prerequisite The Logs tab is available when the Java application includes the Log MBean. Procedure To view a list of the log entries, click the Log Entries tab. By default, the list shows log entries in ascending order. You can drill down to each log entry to view detailed information about the log entry. To filter the list of logs to show specific log types, click the Action Bar . You can filter the log entries section according to a text string or the logging level. To change the Fuse Console default settings: In the upper right corner of the Fuse Console, click the user icon and then click Preferences from the drop-down menu. To change the default sorting order, select Server Logs and then click the log entry link to drill down to details about the log entry, such as the bundle name, thread, and the full message text. Optionally, you can customize these settings for storing log messages: The number of log statements to keep in the Fuse Console (the default is 100). The global log level: INFO (the default), OFF, ERROR, WARN, and DEBUG. The child-level messages to include, such as hawtio-oauth and hawtio-core-utils . To reset the Fuse Console Logs settings to the default values, click Reset and then click Reset settings . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/fuse-console-view-logs_karaf
Chapter 2. Installing Red Hat build of OpenJDK 8 on Red Hat Enterprise Linux | Chapter 2. Installing Red Hat build of OpenJDK 8 on Red Hat Enterprise Linux Red Hat build of OpenJDK is an environment for developing and running a wide range of platform-agnostic applications, from mobile applications to desktop and web applications and enterprise systems. Red Hat provides an open source implementation of the Java Platform SE (Standard Edition) called Red Hat build of OpenJDK. Applications are developed using the JDK (Java Development Kit). Applications are run on a JVM (Java Virtual Machine), which is included in the JRE (Java Runtime Environment) and the JDK. There is also a headless version of Java which has the smallest footprint and does not include the libraries needed for a user interface. The headless version is packaged in the headless subpackage. Note If you are unsure whether you need the JRE or the JDK, it is recommended that you install the JDK. The following sections provide instructions for installing Red Hat build of OpenJDK on Red Hat Enterprise Linux. Note You can install multiple major versions of Red Hat build of OpenJDK on your local system. If you need to switch from one major version to another major version, issue the following command in your command-line interface (CLI) and then following the onscreen prompts: 2.1. Installing a JRE on RHEL by using yum You can install Red Hat build of OpenJDK Java Runtime Environment (JRE) using the system package manager, yum . Prerequisites Logged in as a user with root privileges on the system. Registered your local system to your Red Hat Subscription Management account. See the Registering a system using Red Hat Subscription Management user guide. Procedure Run the yum command, specifying the package you want to install: Check that the installation works: Note If the output from the command shows that you have a different major version of Red Hat build of OpenJDK checked out on your system, you can enter the following command in your CLI to switch your system to use Red Hat build of OpenJDK 8: 2.2. Installing a JRE on RHEL by using an archive You can install Red Hat build of OpenJDK Java Runtime Environment (JRE) by using an archive. This is useful if the Java administrator does not have root privileges. Note To ease the upgrades for later versions create a parent directory to contain your JREs and create a symbolic link to the latest JRE using a generic path. Procedure Create a directory to where you want to download the archive file, and then navigate to that directory on your command-line interface (CLI). For example: Navigate to the Software Downloads page on the Red Hat Customer Portal. Select the latest version of Red Hat build of OpenJDK 8 from the Version drop-down list, and then download the JRE archive for Linux to your local system. Extract the contents of the archive to a directory of your choice: Create a generic path by using symbolic links to your JRE for easier upgrades: Configure the JAVA_HOME environment variable: Verify that JAVA_HOME environment variable is set correctly: Note When installed using this method, Java will only be available for the current user. Add the bin directory of the generic JRE path to the PATH environment variable: Verify that java -version works without supplying the full path: Note You can ensure that JAVA_HOME environment variable persists for the current user by exporting the environment variable in ~/.bashrc . 2.3. 
Installing Red Hat build of OpenJDK on RHEL by using yum You can install Red Hat build of OpenJDK using the system package manager, yum . Prerequisites Log in as a user with root privileges. Registered your local system to your Red Hat Subscription Management account. See the Registering a system using Red Hat Subscription Management user guide. Procedure Run the yum command, specifying the package you want to install: Check that the installation works: 2.4. Installing Red Hat build of OpenJDK on RHEL by using an archive You can install Red Hat build of OpenJDK with an archive. This is useful if the Java administrator does not have root privileges. Note To ease upgrades, create a parent directory to contain your JREs and create a symbolic link to the latest JRE using a generic path. Procedure Create a directory to where you want to download the archive file, and then navigate to that directory on your command-line interface (CLI). For example: Navigate to the Software Downloads page on the Red Hat Customer Portal. Select the latest version of Red Hat build of OpenJDK 8 from the Version drop-down list, and then download the JDK archive for Linux to your local system. Extract the contents of the archive to a directory of your choice: Create a generic path by using symbolic links to your JDK for easier upgrades: Configure the JAVA_HOME environment variable: Verify that JAVA_HOME environment variable is set correctly: Note When installed using this method, Java will only be available for the current user. Add the bin directory of the generic JRE path to the PATH environment variable: Verify that java -version works without supplying the full path: Note You can ensure that JAVA_HOME environment variable persists for the current user by exporting the environment variable in ~/.bashrc . 2.5. Installing multiple major versions of Red Hat build of OpenJDK on RHEL by using yum You can install multiple versions of Red Hat build of OpenJDK using the system package manager, yum . Prerequisites A Red Hat Subscription Management (RHSM) account with an active subscription that provides access to a repository that provides the Red Hat build of OpenJDK you want to install. You must have root privileges on the system. Procedure Run the following yum commands to install the package: For Red Hat build of OpenJDK 17 For Red Hat build of OpenJDK 11 For Red Hat build of OpenJDK 8 After installing, check the available java versions: Check the current java version: Note If the output from the command shows that you have a different major version of Red Hat build of OpenJDK checked out on your system, you can enter the following command in your CLI to switch your system to use Red Hat build of OpenJDK 8: Additional resources For more information about configuring the default Java version, see Non-interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL . 2.6. Installing multiple major versions of Red Hat build of OpenJDK on RHEL by using an archive You can install multiple major versions of Red Hat build of OpenJDK by using the same procedures found in Installing a JRE on RHEL by using an archive or Installing Red Hat build of OpenJDK on RHEL 8 by using an archive using multiple major versions . Note For instructions how to configure the default Red Hat build of OpenJDK version for the system, see Interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL . Additional resources For instructions on installing a JRE, see Installing a JRE on RHEL using an archive . 
For instructions on installing a JDK, see Installing Red Hat build of OpenJDK on RHEL using an archive . 2.7. Installing multiple minor versions of Red Hat build of OpenJDK on RHEL by using yum You can install multiple minor versions of Red Hat build of OpenJDK on RHEL. This is done by preventing the installed minor versions from being updated. Prerequisites Choose system-wide version of Red Hat build of OpenJDK from Non-interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL . Procedure Add the installonlypkgs option in the /etc/yum.conf directory to specify the Red Hat build of OpenJDK packages that yum can install but not update. Updates will install new packages while leaving the old versions on the system. The different minor versions of Red Hat build of OpenJDK can be found in the /usr/lib/jvm/ <minor version> files. For example, the following shows part of /usr/lib/jvm/java-1.8.0-openjdk-1.8.0 : 2.8. Installing multiple minor versions of Red Hat build of OpenJDK on RHEL by using an archive Installing multiple minor versions is the same as Installing a JRE on RHEL by using an archive or Installing Red Hat build of OpenJDK on RHEL 8 by using an archive using multiple minor versions. Note For instructions how to choose a default minor version for the system, see Non-interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL . Additional resources For instructions on installing a JRE, see Installing a JRE on RHEL using an archive . For instructions on installing a JDK, see Installing Red Hat build of OpenJDK on RHEL using an archive . | [
"sudo update-alternatives --config 'java'",
"sudo yum install java-1.8.0-openjdk",
"java -version openjdk version \"1.8.0_322\" OpenJDK Runtime Environment (build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (build 25.322-b06, mixed mode)",
"sudo update-alternatives --config 'java'",
"mkdir ~/jres cd ~/jres",
"tar -xf java-1.8.0-openjdk-portable-1.8.0.322.b06-4.portable.jre.el7.x86_64.tar.xz -C ~/jres",
"ln -s ~/jres/java-1.8.0-openjdk-portable-1.8.0.322.b06-4.portable.jre.el7.x86_64 ~/jres/java-8",
"export JAVA_HOME=~/jres/java-8",
"printenv | grep JAVA_HOME JAVA_HOME=~/jres/java-8",
"export PATH=\"USDJAVA_HOME/bin:USDPATH\"",
"java -version openjdk version \"1.8.0_322\" OpenJDK Runtime Environment (build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (build 25.322-b06, mixed mode)",
"sudo yum install java-1.8.0-openjdk-devel",
"javac -version javac 1.8.0_322",
"mkdir ~/jdks cd ~/Downloads",
"tar -xf java-1.8.0-openjdk-portable-1.8.0.322.b06-4.portable.jdk.el7.x86_64.tar.xz -C ~/jdks",
"ln -s ~/jdks/java-1.8.0-openjdk-portable-1.8.0.322.b06-4.portable.jdk.el7.x86_64 ~/jdks/java-8",
"export JAVA_HOME=~/jdks/java-8",
"printenv | grep JAVA_HOME JAVA_HOME=~/jdks/java-8",
"export PATH=\"USDJAVA_HOME/bin:USDPATH\"",
"java -version openjdk version \"1.8.0_322\" OpenJDK Runtime Environment (build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (build 25.322-b06, mixed mode)",
"sudo yum install java-17-openjdk",
"sudo yum install java-11-openjdk",
"sudo yum install java-1.8.0-openjdk",
"sudo yum list installed \"java*\" Installed Packages java-1.8.0-openjdk.x86_64 1:1.8.0.322.b06-2.el8_5 @rhel-8-for-x86_64-appstream-rpms java-11-openjdk.x86_64 1:11.0.14.0.9-2.el8_5 @rhel-8-for-x86_64-appstream-rpms java-17-openjdk.x86_64 1:17.0.2.0.8-4.el8_5 @rhel-8-for-x86_64-appstream-rpms",
"java -version openjdk version \"1.8.0_322\" OpenJDK Runtime Environment (build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (build 25.322-b06, mixed mode)",
"sudo update-alternatives --config 'java'",
"installonlypkgs=java- <version> --openjdk,java- <version> --openjdk-headless,java- <version> --openjdk-devel",
"rpm -qa | grep java-1.8.0-openjdk java-1.8.0-java-1.8.0-openjdk-1.8.0.312.b07-2.el8_5.x86_64 java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5.x86_64",
"/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.322.b06-2.el8_5.x86_64/bin/java -version openjdk version \"1.8.0_322\" OpenJDK Runtime Environment (build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (build 25.322-b06, mixed mode) /usr/lib/jvm/java-1.8.0-java-1.8.0-openjdk-1.8.0.312.b07-2.el8_5.x86_64/bin/java -version openjdk version \"1.8.0_312\" OpenJDK Runtime Environment (build 1.8.0_312-b07) OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_rhel/assembly_installing-openjdk-8-on-red-hat-enterprise-linux_openjdk |
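The archive-based procedures above set JAVA_HOME and PATH only for the current shell. A minimal sketch of persisting them for the current user, assuming the JDK archive was symlinked to ~/jdks/java-8 as in the steps above (the paths come from the example and may differ on your system):

# Append the exports to the user's shell profile so they survive new sessions
cat >> ~/.bashrc << 'EOF'
export JAVA_HOME="$HOME/jdks/java-8"
export PATH="$JAVA_HOME/bin:$PATH"
EOF

# Reload the profile and confirm the JDK resolves without a full path
source ~/.bashrc
java -version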
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/providing-feedback |
Configure data sources | Configure data sources Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configure_data_sources/index |
Chapter 4. View OpenShift Data Foundation Topology | Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/viewing-odf-topology_mcg-verify |
Chapter 5. Testing Clair | Chapter 5. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug. | [
"podman pull ubuntu:20.04",
"sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-testing |
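For repeated testing, the three podman commands above can be collected into one small script. A minimal sketch, assuming the quay-server.example.com host and quayadmin namespace from the example (substitute your own registry host, namespace, and TLS settings):

#!/bin/bash
REGISTRY=quay-server.example.com   # assumed registry host from the example
NAMESPACE=quayadmin                # assumed namespace from the example

# Pull a sample image, tag it for the registry, and push it so Clair can scan it
podman pull docker.io/library/ubuntu:20.04
podman tag docker.io/library/ubuntu:20.04 "${REGISTRY}/${NAMESPACE}/ubuntu:20.04"
podman push --tls-verify=false "${REGISTRY}/${NAMESPACE}/ubuntu:20.04"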
A.11. numastat | A.11. numastat The numastat tool is provided by the numactl package, and displays memory statistics (such as allocation hits and misses) for processes and the operating system on a per-NUMA-node basis. The default tracking categories for the numastat command are outlined as follows: numa_hit The number of pages that were successfully allocated to this node. numa_miss The number of pages that were allocated on this node because of low memory on the intended node. Each numa_miss event has a corresponding numa_foreign event on another node. numa_foreign The number of pages initially intended for this node that were allocated to another node instead. Each numa_foreign event has a corresponding numa_miss event on another node. interleave_hit The number of interleave policy pages successfully allocated to this node. local_node The number of pages successfully allocated on this node, by a process on this node. other_node The number of pages allocated on this node, by a process on another node. Supplying any of the following options changes the displayed units to megabytes of memory (rounded to two decimal places), and changes other specific numastat behaviors as described below. -c Horizontally condenses the displayed table of information. This is useful on systems with a large number of NUMA nodes, but column width and inter-column spacing are somewhat unpredictable. When this option is used, the amount of memory is rounded to the nearest megabyte. -m Displays system-wide memory usage information on a per-node basis, similar to the information found in /proc/meminfo . -n Displays the same information as the original numastat command ( numa_hit , numa_miss , numa_foreign , interleave_hit , local_node , and other_node ), with an updated format, using megabytes as the unit of measurement. -p pattern Displays per-node memory information for the specified pattern. If the value for pattern is comprised of digits, numastat assumes that it is a numerical process identifier. Otherwise, numastat searches process command lines for the specified pattern. Command line arguments entered after the value of the -p option are assumed to be additional patterns for which to filter. Additional patterns expand, rather than narrow, the filter. -s Sorts the displayed data in descending order so that the biggest memory consumers (according to the total column) are listed first. Optionally, you can specify a node, and the table will be sorted according to the node column. When using this option, the node value must follow the -s option immediately, as shown here: Do not include white space between the option and its value. -v Displays more verbose information. Namely, process information for multiple processes will display detailed information for each process. -V Displays numastat version information. -z Omits table rows and columns with only zero values from the displayed information. Note that some near-zero values that are rounded to zero for display purposes will not be omitted from the displayed output. | [
"numastat -s2"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-numastat |
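The options described above can be combined on one command line. A short illustration of typical invocations using only the flags documented in this section (the process name is a placeholder):

# Condensed per-node memory usage in megabytes, omitting all-zero rows and columns
numastat -cmz

# Per-node memory for processes whose command line matches a pattern (hypothetical process name)
numastat -p qemu-kvm

# Sort the table by node 0 in descending order; note there is no space between -s and the node
numastat -s0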
Chapter 5. Remote health monitoring | Chapter 5. Remote health monitoring OpenShift Data Foundation collects anonymized aggregated information about the health, usage, and size of clusters and reports it to Red Hat via an integrated component called Telemetry. This information allows Red Hat to improve OpenShift Data Foundation and to react to issues that impact customers more quickly. A cluster that reports data to Red Hat via Telemetry is considered a connected cluster . 5.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. These metrics are sent continuously and describe: The size of an OpenShift Data Foundation cluster The health and status of OpenShift Data Foundation components The health and status of any upgrade being performed Limited usage information about OpenShift Data Foundation components and features Summary info about alerts reported by the cluster monitoring component This continuous stream of data is used by Red Hat to monitor the health of clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Data Foundation upgrades to customers so as to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and engineering teams with the same restrictions as accessing data reported via support cases. All connected cluster information is used by Red Hat to help make OpenShift Data Foundation better and more intuitive to use. None of the information is shared with third parties. 5.2. Information collected by Telemetry Primary information collected by Telemetry includes: The size of the Ceph cluster in bytes : "ceph_cluster_total_bytes" , The amount of the Ceph cluster storage used in bytes : "ceph_cluster_total_used_raw_bytes" , Ceph cluster health status : "ceph_health_status" , The total count of object storage devices (OSDs) : "job:ceph_osd_metadata:count" , The total number of OpenShift Data Foundation Persistent Volumes (PVs) present in the Red Hat OpenShift Container Platform cluster : "job:kube_pv:count" , The total input/output operations per second (IOPS) (reads+writes) value for all the pools in the Ceph cluster : "job:ceph_pools_iops:total" , The total IOPS (reads+writes) value in bytes for all the pools in the Ceph cluster : "job:ceph_pools_iops_bytes:total" , The total count of the Ceph cluster versions running : "job:ceph_versions_running:count" The total number of unhealthy NooBaa buckets : "job:noobaa_total_unhealthy_buckets:sum" , The total number of NooBaa buckets : "job:noobaa_bucket_count:sum" , The total number of NooBaa objects : "job:noobaa_total_object_count:sum" , The count of NooBaa accounts : "noobaa_accounts_num" , The total usage of storage by NooBaa in bytes : "noobaa_total_usage" , The total amount of storage requested by the persistent volume claims (PVCs) from a particular storage provisioner in bytes: "cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" , The total amount of storage used by the PVCs from a particular storage provisioner in bytes: "cluster:kubelet_volume_stats_used_bytes:provisioner:sum" . Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/monitoring_openshift_data_foundation/remote_health_monitoring |
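The metrics listed above can be spot-checked from inside a connected cluster before they are reported to Red Hat. A minimal sketch using the Prometheus HTTP API through the monitoring stack's Thanos Querier route; the route name, namespace, and chosen metric are assumptions based on a default OpenShift monitoring setup and the metric names listed in this section:

# Obtain a token for the logged-in user and the Thanos Querier host (assumed defaults)
TOKEN=$(oc whoami -t)
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')

# Query one of the Telemetry metrics listed above, for example the raw Ceph cluster size
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/api/v1/query?query=ceph_cluster_total_bytes"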
5.9. Using a Beta Release with UEFI Secure Boot | 5.9. Using a Beta Release with UEFI Secure Boot Note This section only concerns Beta releases of Red Hat Enterprise Linux 7. The UEFI Secure Boot technology requires that the operating system kernel must be signed with a recognized private key in order to be able to boot. In every beta release of Red Hat Enterprise Linux 7, the kernel is signed with a Red Hat Beta-specific private key, which is different from the more common Red Hat key used to sign kernels in General Availability (non-Beta) releases. The Beta private key will likely not be recognized by your hardware, which means that any Beta release of Red Hat Enterprise Linux 7 will not be able to boot. In order to use a Beta release with UEFI Secure Boot enabled, you need to add the Red Hat Beta public key to your system using the Machine Owner Key (MOK) facility. The procedure to add the Red Hat Beta key to your system is below. Procedure 5.1. Adding a Custom Private Key for UEFI Secure Boot First, disable UEFI Secure Boot on the system, and install Red Hat Enterprise Linux 7 normally. After the installation finishes, the system will reboot. Secure Boot should still be disabled at this point. Reboot the system, log in and, if applicable, go through the Initial Setup screens as described in Chapter 30, Initial Setup . After finishing the first boot and going through Initial Setup, install the kernel-doc package if not installed already: This package provides a certificate file which contains the Red Hat CA public Beta key, located in /usr/share/doc/kernel-keys/ kernel-version /kernel-signing-ca.cer , where kernel-version is the kernel version string without the platform architecture suffix - for example, 3.10.0-686.el7 . Execute the following commands to enroll the public key into the system Machine Owner Key (MOK) list: Enter a password of your choosing when prompted. Note Make sure to remember the password. It is required to finish this procedure as well as to remove the imported key when it is no longer needed. Reboot the system again. During startup you will be prompted to confirm that you want to complete the pending key enrollment request. Select yes, and provide the password which you set earlier using the mokutil command in the previous step. The system will reboot again after you do so, and the key will be imported into the system firmware. You can turn on Secure Boot on this or any subsequent reboot. Warning Remove the imported Beta public key when you no longer need it. If you install a final (General Availability) release of Red Hat Enterprise Linux 7, or when you install a different operating system, you should remove the imported key. If you have only imported this public key, you can use the following command to reset the MOK: After the reboot, the firmware will prompt you for a confirmation and the password you created when importing the key. The key will be removed from the MOK after providing the correct password, and the system will revert to its original state. | [
"yum install kernel-doc",
"kr=USD(uname -r) # mokutil --import /usr/share/doc/kernel-keys/USD{kr%.USD(uname -p)}/kernel-signing-ca.cer",
"mokutil --reset"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-planning-beta-secure-boot |
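The state of Secure Boot and of the enrolled keys can be checked from the running system before and after the reboot steps above. A minimal sketch using mokutil (the import command repeats the one from the procedure and assumes the kernel-doc package is installed):

# Check whether UEFI Secure Boot is currently enabled
mokutil --sb-state

# List the keys already enrolled in the MOK to confirm whether the Beta key was imported
mokutil --list-enrolled

# Re-run the enrollment request if the key is missing
kr=$(uname -r)
mokutil --import /usr/share/doc/kernel-keys/${kr%.$(uname -p)}/kernel-signing-ca.cer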
Chapter 6. Expanding persistent volumes | Chapter 6. Expanding persistent volumes 6.1. Enabling volume expansion support Before you can expand persistent volumes, the StorageClass object must have the allowVolumeExpansion field set to true . Procedure Edit the StorageClass object and add the allowVolumeExpansion attribute. The following example demonstrates adding this line at the bottom of the storage class configuration. apiVersion: storage.k8s.io/v1 kind: StorageClass ... parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1 1 Setting this attribute to true allows PVCs to be expanded after creation. 6.2. Expanding CSI volumes You can use the Container Storage Interface (CSI) to expand storage volumes after they have already been created. OpenShift Container Platform supports CSI volume expansion by default. However, a specific CSI driver is required. OpenShift Container Platform 4.7 supports version 1.1.0 of the CSI specification . Important Expanding CSI volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 6.3. Expanding FlexVolume with a supported driver When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in OpenShift Container Platform. FlexVolume allows expansion if the driver is set with RequiresFSResize to true . The FlexVolume can be expanded on pod restart. Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod. Prerequisites The underlying volume driver supports resize. The driver is set with the RequiresFSResize capability to true . Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . Procedure To use resizing in the FlexVolume plugin, you must implement the ExpandableVolumePlugin interface using these methods: RequiresFSResize If true , updates the capacity directly. If false , calls the ExpandFS method to finish the filesystem resize. ExpandFS If true , calls ExpandFS to resize filesystem after physical volume expansion is done. The volume driver can also perform physical volume resize together with filesystem resize. Important Because OpenShift Container Platform does not support installation of FlexVolume plugins on control plane nodes (also known as the master nodes), it does not support control-plane expansion of FlexVolume. 6.4. Expanding persistent volume claims (PVCs) with a file system Expanding PVCs based on volume types that need file system resizing, such as GCE PD, EBS, and Cinder, is a two-step process. This process involves expanding volume objects in the cloud provider, and then expanding the file system on the actual node. Expanding the file system on the node only happens when a new pod is started with the volume. Prerequisites The controlling StorageClass object must have allowVolumeExpansion set to true . Procedure Edit the PVC and request a new size by editing spec.resources.requests . 
For example, the following expands the ebs PVC to 8 Gi. kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: "storageClassWithFlagSet" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1 1 Updating spec.resources.requests to a larger amount will expand the PVC. After the cloud provider object has finished resizing, the PVC is set to FileSystemResizePending . Check the condition by entering the following command: USD oc describe pvc <pvc_name> When the cloud provider object has finished resizing, the PersistentVolume object reflects the newly requested size in PersistentVolume.Spec.Capacity . At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the FileSystemResizePending condition is removed from the PVC. 6.5. Recovering from failure when expanding volumes If expanding underlying storage fails, the OpenShift Container Platform administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention. Procedure Mark the persistent volume (PV) that is bound to the PVC with the Retain reclaim policy. This can be done by editing the PV and changing persistentVolumeReclaimPolicy to Retain . Delete the PVC. This will be recreated later. To ensure that the newly created PVC can bind to the PV marked Retain , manually edit the PV and delete the claimRef entry from the PV specs. This marks the PV as Available . Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider. Set the volumeName field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only. Restore the reclaim policy on the PV. | [
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/storage/expanding-persistent-volumes |
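Editing the claim can also be done non-interactively. A minimal sketch using oc patch, assuming the ebs claim, the storageClassWithFlagSet storage class, and the 8Gi target size from the examples above:

# Confirm the storage class allows expansion before requesting a larger size
oc get storageclass storageClassWithFlagSet -o jsonpath='{.allowVolumeExpansion}'

# Request the larger size on the claim (equivalent to editing spec.resources.requests)
oc patch pvc ebs -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'

# Watch for the FileSystemResizePending condition while the cloud provider resizes the volume
oc describe pvc ebs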
Chapter 21. consistency | Chapter 21. consistency This chapter describes the commands under the consistency command. 21.1. consistency group add volume Add volume(s) to consistency group Usage: Table 21.1. Positional Arguments Value Summary <consistency-group> Consistency group to contain <volume> (name or id) <volume> Volume(s) to add to <consistency-group> (name or id) (repeat option to add multiple volumes) Table 21.2. Optional Arguments Value Summary -h, --help Show this help message and exit 21.2. consistency group create Create new consistency group. Usage: Table 21.3. Positional Arguments Value Summary <name> Name of new consistency group (default to none) Table 21.4. Optional Arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type of this consistency group (name or id) --consistency-group-source <consistency-group> Existing consistency group (name or id) --consistency-group-snapshot <consistency-group-snapshot> Existing consistency group snapshot (name or id) --description <description> Description of this consistency group --availability-zone <availability-zone> Availability zone for this consistency group (not available if creating consistency group from source) Table 21.5. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 21.6. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 21.7. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 21.8. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 21.3. consistency group delete Delete consistency group(s). Usage: Table 21.9. Positional Arguments Value Summary <consistency-group> Consistency group(s) to delete (name or id) Table 21.10. Optional Arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 21.4. consistency group list List consistency groups. Usage: Table 21.11. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show details for all projects. admin only. (defaults to False) --long List additional fields in output Table 21.12. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 21.13. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 21.14. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 21.15. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 21.5. consistency group remove volume Remove volume(s) from consistency group Usage: Table 21.16. Positional Arguments Value Summary <consistency-group> Consistency group containing <volume> (name or id) <volume> Volume(s) to remove from <consistency-group> (name or ID) (repeat option to remove multiple volumes) Table 21.17. Optional Arguments Value Summary -h, --help Show this help message and exit 21.6. consistency group set Set consistency group properties Usage: Table 21.18. Positional Arguments Value Summary <consistency-group> Consistency group to modify (name or id) Table 21.19. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New consistency group name --description <description> New consistency group description 21.7. consistency group show Display consistency group details. Usage: Table 21.20. Positional Arguments Value Summary <consistency-group> Consistency group to display (name or id) Table 21.21. Optional Arguments Value Summary -h, --help Show this help message and exit Table 21.22. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 21.23. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 21.24. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 21.25. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 21.8. consistency group snapshot create Create new consistency group snapshot. Usage: Table 21.26. Positional Arguments Value Summary <snapshot-name> Name of new consistency group snapshot (default to None) Table 21.27. Optional Arguments Value Summary -h, --help Show this help message and exit --consistency-group <consistency-group> Consistency group to snapshot (name or id) (default to be the same as <snapshot-name>) --description <description> Description of this consistency group snapshot Table 21.28. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 21.29. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 21.30. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 21.31. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 21.9. consistency group snapshot delete Delete consistency group snapshot(s). Usage: Table 21.32. 
Positional Arguments Value Summary <consistency-group-snapshot> Consistency group snapshot(s) to delete (name or id) Table 21.33. Optional Arguments Value Summary -h, --help Show this help message and exit 21.10. consistency group snapshot list List consistency group snapshots. Usage: Table 21.34. Optional Arguments Value Summary -h, --help Show this help message and exit --all-projects Show detail for all projects (admin only) (defaults to False) --long List additional fields in output --status <status> Filters results by a status ("available", "error", "creating", "deleting" or "error_deleting") --consistency-group <consistency-group> Filters results by a consistency group (name or id) Table 21.35. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 21.36. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 21.37. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 21.38. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 21.11. consistency group snapshot show Display consistency group snapshot details Usage: Table 21.39. Positional Arguments Value Summary <consistency-group-snapshot> Consistency group snapshot to display (name or id) Table 21.40. Optional Arguments Value Summary -h, --help Show this help message and exit Table 21.41. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 21.42. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 21.43. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 21.44. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack consistency group add volume [-h] <consistency-group> <volume> [<volume> ...]",
"openstack consistency group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] (--volume-type <volume-type> | --consistency-group-source <consistency-group> | --consistency-group-snapshot <consistency-group-snapshot>) [--description <description>] [--availability-zone <availability-zone>] [<name>]",
"openstack consistency group delete [-h] [--force] <consistency-group> [<consistency-group> ...]",
"openstack consistency group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--long]",
"openstack consistency group remove volume [-h] <consistency-group> <volume> [<volume> ...]",
"openstack consistency group set [-h] [--name <name>] [--description <description>] <consistency-group>",
"openstack consistency group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group>",
"openstack consistency group snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consistency-group <consistency-group>] [--description <description>] [<snapshot-name>]",
"openstack consistency group snapshot delete [-h] <consistency-group-snapshot> [<consistency-group-snapshot> ...]",
"openstack consistency group snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all-projects] [--long] [--status <status>] [--consistency-group <consistency-group>]",
"openstack consistency group snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group-snapshot>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/consistency |
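The subcommands documented above are typically chained together. A minimal sketch of one possible workflow, using placeholder names for the volume type, group, volume, and snapshot:

# Create a consistency group from an existing volume type (all names are placeholders)
openstack consistency group create --volume-type my-volume-type --description "demo group" my-group

# Add an existing volume to the group, then snapshot the whole group
openstack consistency group add volume my-group my-volume
openstack consistency group snapshot create --consistency-group my-group my-group-snap

# Inspect the snapshot, then remove the volume and delete the group when finished
openstack consistency group snapshot show my-group-snap
openstack consistency group remove volume my-group my-volume
openstack consistency group delete my-group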
function::tz_gmtoff | function::tz_gmtoff Name function::tz_gmtoff - Return local time zone offset Synopsis Arguments None Description Returns the local time zone offset (seconds west of UTC), as passed by staprun at script startup only. | [
"tz_gmtoff()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tz-gmtoff |
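A short SystemTap one-liner can print the value this function returns. A minimal sketch, run from a shell with the systemtap package installed and sufficient privileges:

# Print the local time zone offset in seconds west of UTC, then exit
stap -e 'probe begin { printf("tz_gmtoff = %d\n", tz_gmtoff()); exit() }'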
Chapter 5. Java security manager | Chapter 5. Java security manager By defining a Java security policy you can configure the Java Security Manager to manage the external boundary of the Java Virtual Machine (JVM). 5.1. About the Java security manager The Java Security Manager is a class that manages the external boundary of the Java Virtual Machine (JVM) sandbox, controlling how code executing within the JVM can interact with resources outside the JVM. When the Java Security Manager is activated, the Java API checks with the security manager for approval before executing a wide range of potentially unsafe operations. The Java Security Manager uses a security policy to determine whether a given action will be allowed or denied. 5.2. About Java security policy A Java security policy is a set of defined permissions for different classes of code. The Java Security Manager compares actions requested by applications against the security policy. If an action is allowed by the policy, the Security Manager will permit that action to take place. If the action is not allowed by the policy, the Security Manager will deny that action. Important Previous versions of JBoss EAP defined policies using an external file, e.g. EAP_HOME /bin/server.policy . JBoss EAP 7 defines Java Security Policies in two ways: the security-manager subsystem and through XML files in the individual deployments. The security-manager subsystem defines minimum and maximum permissions for ALL deployments, while the XML files specify the permissions requested by the individual deployment. 5.2.1. About defining policies in the security manager subsystem The security-manager subsystem allows you to define shared or common permissions for all deployments. This is accomplished by defining minimum and maximum permission sets. All deployments will be granted at least all permissions defined in the minimum permission set. The deployment process fails for a deployment if it requests a permission that exceeds the ones defined in the maximum permission set. Example: Management CLI command for updating minimum permission set /subsystem=security-manager/deployment-permissions=default:write-attribute(name=minimum-permissions, value=[{class="java.util.PropertyPermission", actions="read", name="*"}]) Example: Management CLI command for updating maximum permission set /subsystem=security-manager/deployment-permissions=default:write-attribute(name=maximum-permissions, value=[{class="java.util.PropertyPermission", actions="read,write", name="*"}, {class="java.io.FilePermission", actions="read,write", name="/-"}]) Note If the maximum permission set is not defined, its value defaults to java.security.AllPermission . Additional resources You can find a full reference of the security-manager subsystem in the JBoss EAP Configuration Guide . 5.2.2. About defining policies in the deployment In JBoss EAP 7, you can add a META-INF/permissions.xml to your deployment. This file allows you to specify the permissions needed by the deployment. If a minimum permissions set is defined in the security-manager subsystem and a META-INF/permissions.xml is added to your deployment, then the union of those permissions is granted. If the permissions requested in the permissions.xml exceed the maximum policies defined in the security-manager subsystem, its deployment will not succeed. If both META-INF/permissions.xml and META-INF/jboss-permissions.xml are present in the deployment, then only the permissions requested in the META-INF/jboss-permissions.xml are granted.
The specification dictates that permissions.xml cover the entire application or top-level deployment module. In cases where you wish to define specific permissions for a subdeployment, you can use the JBoss EAP-specific META-INF/jboss-permissions.xml . It follows the same exact format as permissions.xml and will apply only to the deployment module in which it is declared. Example: Sample permissions.xml <permissions version="7"> <permission> <class-name>java.util.PropertyPermission</class-name> <name>*</name> <actions>read</actions> </permission> </permissions> Additional resources JSR 342 META-INF/permissions.xml file . 5.2.3. About defining policies in modules You can restrict the permissions of a module by adding a <permissions> element to the module.xml file. The <permissions> element contains zero or more <grant> elements, which define the permission to grant to the module. Each <grant> element contains the following attributes: permission The qualified class name of the permission to grant. name The permission name to provide to the permission class constructor. actions The (optional) list of actions, required by some permission types. Example: module.xml with Defined Policies <module xmlns="urn:jboss:module:1.5" name="org.jboss.test.example"> <permissions> <grant permission="java.util.PropertyPermission" name="*" actions="read,write" /> <grant permission="java.io.FilePermission" name="/etc/-" actions="read" /> </permissions> ... </module> If the <permissions> element is present, the module will be restricted to only the permissions you have listed. If the <permissions> element is not present, there will be no restrictions on the module. 5.3. Run JBoss EAP with the Java security manager You can run JBoss EAP with the Java Security Manager in two different ways: Using the -secmgr flag with the startup configuration script. Using the startup configuration file. Important Previous versions of JBoss EAP allowed for the use of the -Djava.security.manager Java system property as well as custom security managers. Neither of these is supported in JBoss EAP 7. In addition, the Java Security Manager policies are now defined within the security-manager subsystem, meaning external policy files and the -Djava.security.policy Java system property are not supported in JBoss EAP 7. Important Before starting JBoss EAP with the Java Security Manager enabled, you need to make sure all security policies are defined in the security-manager subsystem. 5.3.1. Using the -secmgr flag with startup configuration script. You can run JBoss EAP with the Java Security Manager. To do this, use the secmgr option during startup. Procedure Include the -secmgr flag when starting up your JBoss EAP instance. Example of how to include the -secmgr flag 5.3.2. Using the startup configuration file You can run JBoss EAP with the Java Security Manager. To do this, you have to modify the startup configuration file. Important The domain or standalone server must be completely stopped before you edit any configuration files. Note If you are using JBoss EAP in a managed domain, you must perform the following procedure on each physical host or instance in your domain. Procedure To enable the Java Security Manager using the startup configuration file, edit either the standalone.conf or domain.conf file, depending on whether you are running a standalone instance or a managed domain. If running in Windows, the standalone.conf.bat or domain.conf.bat files are used instead.
Uncomment the SECMGR="true" line in the configuration file: Example standalone.conf or domain.conf Example standalone.conf.bat or domain.conf.bat 5.4. Considerations before moving from previous versions When moving applications from a previous version of JBoss EAP to JBoss EAP 7 running with the Java Security Manager enabled, you need to be aware of the changes in how policies are defined as well as the necessary configuration needed with both the JBoss EAP configuration and the deployment. Here are the changes that you should be aware of: In previous versions of JBoss EAP, policies were defined in an external configuration file. In JBoss EAP 7, policies are defined using the security-manager subsystem and with permissions.xml or jboss-permissions.xml contained in the deployment. In previous versions of JBoss EAP, you could use the -Djava.security.manager and -Djava.security.policy Java system properties during JBoss EAP startup. These are no longer supported and the secmgr flag should be used instead to enable JBoss EAP to run with the Java Security Manager. Custom security managers are not supported in JBoss EAP 7. Additional resources Defining a Java Security Policy . How to run JBoss EAP with the Java Security Manager . | [
"/subsystem=security-manager/deployment-permissions=default:write-attribute(name=minimum-permissions, value=[{class=\"java.util.PropertyPermission\", actions=\"read\", name=\"*\"}])",
"/subsystem=security-manager/deployment-permissions=default:write-attribute(name=maximum-permissions, value=[{class=\"java.util.PropertyPermission\", actions=\"read,write\", name=\"*\"}, {class=\"java.io.FilePermission\", actions=\"read,write\", name=\"/-\"}])",
"<permissions version=\"7\"> <permission> <class-name>java.util.PropertyPermission</class-name> <name>*</name> <actions>read</actions> </permission> </permissions>",
"<module xmlns=\"urn:jboss:module:1.5\" name=\"org.jboss.test.example\"> <permissions> <grant permission=\"java.util.PropertyPermission\" name=\"*\" actions=\"read,write\" /> <grant permission=\"java.io.FilePermission\" name=\"/etc/-\" actions=\"read\" /> </permissions> </module>",
"./standalone.sh -secmgr",
"Uncomment this to run with a security manager enabled SECMGR=\"true\"",
"rem # Uncomment this to run with a security manager enabled set \"SECMGR=true\""
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/assembly_java-security-manager_credential-stores-in-elytron |
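The permission sets defined earlier can be read back once the server is up. A minimal sketch, assuming a standalone server and commands run from EAP_HOME (paths are illustrative):

# Start the server with the Java Security Manager enabled
./bin/standalone.sh -secmgr

# From a second terminal, read the deployment permission sets held by the security-manager subsystem
./bin/jboss-cli.sh --connect --command="/subsystem=security-manager/deployment-permissions=default:read-resource"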
Chapter 11. AWS Simple Queue Service (SQS) | Chapter 11. AWS Simple Queue Service (SQS) Both producer and consumer are supported The AWS2 SQS component supports sending and receiving messages to Amazon's SQS service . Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon SQS. More information is available at Amazon SQS . 11.1. Dependencies When using aws2-sqs with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sqs-starter</artifactId> </dependency> 11.2. URI Format The queue will be created if they don't already exists. You can append query options to the URI in the following format, ?options=value&option2=value&... 11.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 11.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 11.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 11.4. Component Options The AWS Simple Queue Service (SQS) component supports 43 options, which are listed below. Name Description Default Type amazonAWSHost (common) The hostname of the Amazon AWS cloud. amazonaws.com String amazonSQSClient (common) Autowired To use the AmazonSQS as client. SqsClient autoCreateQueue (common) Setting the autocreation of the queue. false boolean configuration (common) The AWS SQS default configuration. Sqs2Configuration overrideEndpoint (common) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean protocol (common) The underlying protocol used to communicate with SQS. https String proxyProtocol (common) To define a proxy protocol when instantiating the SQS client. Enum values: HTTP HTTPS HTTPS Protocol queueOwnerAWSAccountId (common) Specify the queue owner aws account id when you need to connect the queue with different account owner. String region (common) The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String trustAllCertificates (common) If we want to trust all certificates in case of overriding the endpoint. 
false boolean uriEndpointOverride (common) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (common) Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false boolean attributeNames (consumer) A list of attribute names to receive when consuming. Multiple names can be separated by comma. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Allows you to use multiple threads to poll the sqs queue to increase throughput. 1 int defaultVisibilityTimeout (consumer) The default visibility timeout (in seconds). Integer deleteAfterRead (consumer) Delete message from SQS after it has been read. true boolean deleteIfFiltered (consumer) Whether or not to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS_DELETE_FILTERED (CamelAwsSqsDeleteFiltered) set to true. true boolean extendMessageVisibility (consumer) If enabled then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. false boolean kmsDataKeyReusePeriodSeconds (consumer) The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). Integer kmsMasterKeyId (consumer) The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. String messageAttributeNames (consumer) A list of message attribute names to receive when consuming. Multiple names can be separated by comma. String serverSideEncryptionEnabled (consumer) Define if Server Side Encryption is enabled or not on the queue. false boolean visibilityTimeout (consumer) The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only make sense if its different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently. Integer waitTimeSeconds (consumer) Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response. Integer batchSeparator (producer) Set the separator when passing a String to send batch message operation. , String delaySeconds (producer) Delay sending messages for a number of seconds. Integer lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageDeduplicationIdStrategy (producer) Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId useContentBasedDeduplication useExchangeId String messageGroupIdStrategy (producer) Only for FIFO queues. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant useExchangeId usePropertyValue String operation (producer) The operation to do in case the user don't want to send only a message. Enum values: sendBatchMessage deleteMessage listQueues purgeQueue deleteQueue Sqs2Operations autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean delayQueue (advanced) Define if you want to apply delaySeconds option to the queue or on single messages. false boolean queueUrl (advanced) To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used, to connect to a mock implementation of SQS, for testing purposes. String proxyHost (proxy) To define a proxy host when instantiating the SQS client. String proxyPort (proxy) To define a proxy port when instantiating the SQS client. Integer maximumMessageSize (queue) The maximumMessageSize (in bytes) an SQS message can contain for this queue. Integer messageRetentionPeriod (queue) The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue. Integer policy (queue) The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String receiveMessageWaitTimeSeconds (queue) If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait. Integer redrivePolicy (queue) Specify the policy that send message to DeadLetter queue. See detail at Amazon docs. String accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 11.5. Endpoint Options The AWS Simple Queue Service (SQS) endpoint is configured using URI syntax: with the following path and query parameters: 11.5.1. Path Parameters (1 parameters) Name Description Default Type queueNameOrArn (common) Required Queue name or ARN. String 11.5.2. Query Parameters (61 parameters) Name Description Default Type amazonAWSHost (common) The hostname of the Amazon AWS cloud. amazonaws.com String amazonSQSClient (common) Autowired To use the AmazonSQS as client. SqsClient autoCreateQueue (common) Setting the autocreation of the queue. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to map headers to/from Camel. 
HeaderFilterStrategy overrideEndpoint (common) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean protocol (common) The underlying protocol used to communicate with SQS. https String proxyProtocol (common) To define a proxy protocol when instantiating the SQS client. Enum values: HTTP HTTPS HTTPS Protocol queueOwnerAWSAccountId (common) Specify the queue owner aws account id when you need to connect the queue with different account owner. String region (common) The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String trustAllCertificates (common) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (common) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (common) Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false boolean attributeNames (consumer) A list of attribute names to receive when consuming. Multiple names can be separated by comma. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Allows you to use multiple threads to poll the sqs queue to increase throughput. 1 int defaultVisibilityTimeout (consumer) The default visibility timeout (in seconds). Integer deleteAfterRead (consumer) Delete message from SQS after it has been read. true boolean deleteIfFiltered (consumer) Whether or not to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS_DELETE_FILTERED (CamelAwsSqsDeleteFiltered) set to true. true boolean extendMessageVisibility (consumer) If enabled then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. See details at Amazon docs. false boolean kmsDataKeyReusePeriodSeconds (consumer) The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). Integer kmsMasterKeyId (consumer) The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. String maxMessagesPerPoll (consumer) Gets the maximum number of messages as a limit to poll at each polling. Is default unlimited, but use 0 or negative number to disable it as unlimited. int messageAttributeNames (consumer) A list of message attribute names to receive when consuming. Multiple names can be separated by comma. String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. 
false boolean serverSideEncryptionEnabled (consumer) Define if Server Side Encryption is enabled or not on the queue. false boolean visibilityTimeout (consumer) The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only make sense if its different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently. Integer waitTimeSeconds (consumer) Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response. Integer exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy batchSeparator (producer) Set the separator when passing a String to send batch message operation. , String delaySeconds (producer) Delay sending messages for a number of seconds. Integer lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageDeduplicationIdStrategy (producer) Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId useContentBasedDeduplication useExchangeId String messageGroupIdStrategy (producer) Only for FIFO queues. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant useExchangeId usePropertyValue String operation (producer) The operation to do in case the user don't want to send only a message. Enum values: sendBatchMessage deleteMessage listQueues purgeQueue deleteQueue Sqs2Operations delayQueue (advanced) Define if you want to apply delaySeconds option to the queue or on single messages. false boolean queueUrl (advanced) To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used, to connect to a mock implementation of SQS, for testing purposes. 
String proxyHost (proxy) To define a proxy host when instantiating the SQS client. String proxyPort (proxy) To define a proxy port when instantiating the SQS client. Integer maximumMessageSize (queue) The maximumMessageSize (in bytes) an SQS message can contain for this queue. Integer messageRetentionPeriod (queue) The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue. Integer policy (queue) The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String receiveMessageWaitTimeSeconds (queue) If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait. Integer redrivePolicy (queue) Specify the policy that send message to DeadLetter queue. See detail at Amazon docs. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required SQS component options You have to provide the amazonSQSClient in the Registry or your accessKey and secretKey to access the Amazon's SQS . 11.6. Batch Consumer This component implements the Batch Consumer. 
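A minimal sketch of what that batch support provides on each exchange, assuming the standard Camel Batch Consumer exchange properties (Exchange.BATCH_INDEX and Exchange.BATCH_SIZE) and reusing the MyQueue and #client placeholders from the examples below; like the other routes in this section, the snippet is meant to live inside a RouteBuilder:

from("aws2-sqs://MyQueue?amazonSQSClient=#client&maxMessagesPerPoll=10")
    .process(exchange -> {
        // CamelBatchIndex / CamelBatchSize are populated per exchange by the batch polling consumer
        Integer index = exchange.getProperty(Exchange.BATCH_INDEX, Integer.class);
        Integer size = exchange.getProperty(Exchange.BATCH_SIZE, Integer.class);
        exchange.getIn().setHeader("batchPosition", (index + 1) + " of " + size);
    })
    .to("mock:result");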
This allows you for instance to know how many messages exists in this batch and for instance let the Aggregator aggregate this number of messages. 11.7. Usage 11.7.1. Static credentials vs Default Credential Provider You have the possibility of avoiding the usage of explicit static credentials, by specifying the useDefaultCredentialsProvider option and set it to true. Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information about this you can look at AWS credentials documentation 11.7.2. Message headers set by the SQS producer Header Type Description CamelAwsSqsMD5OfBody String The MD5 checksum of the Amazon SQS message. CamelAwsSqsMessageId String The Amazon SQS message ID. CamelAwsSqsDelaySeconds Integer The delay seconds that the Amazon SQS message can be see by others. 11.7.3. Message headers set by the SQS consumer Header Type Description CamelAwsSqsMD5OfBody String The MD5 checksum of the Amazon SQS message. CamelAwsSqsMessageId String The Amazon SQS message ID. CamelAwsSqsReceiptHandle String The Amazon SQS message receipt handle. CamelAwsSqsMessageAttributes Map<String, String> The Amazon SQS message attributes. 11.7.4. Advanced AmazonSQS configuration If your Camel Application is running behind a firewall or if you need to have more control over the SqsClient instance configuration, you can create your own instance: from("aws2-sqs://MyQueue?amazonSQSClient=#client&delay=5000&maxMessagesPerPoll=5") .to("mock:result"); 11.7.5. Creating or updating an SQS Queue In the SQS Component, when an endpoint is started, a check is executed to obtain information about the existence of the queue or not. You're able to customize the creation through the QueueAttributeName mapping with the SQSConfiguration option. from("aws2-sqs://MyQueue?amazonSQSClient=#client&delay=5000&maxMessagesPerPoll=5") .to("mock:result"); In this example if the MyQueue queue is not already created on AWS (and the autoCreateQueue option is set to true), it will be created with default parameters from the SQS configuration. If it's already up on AWS, the SQS configuration options will be used to override the existent AWS configuration. 11.7.6. DelayQueue VS Delay for Single message When the option delayQueue is set to true, the SQS Queue will be a DelayQueue with the DelaySeconds option as delay. For more information about DelayQueue you can read the AWS SQS documentation . One important information to take into account is the following: For standard queues, the per-queue delay setting is not retroactive-changing the setting doesn't affect the delay of messages already in the queue. For FIFO queues, the per-queue delay setting is retroactive-changing the setting affects the delay of messages already in the queue. as stated in the official documentation. If you want to specify a delay on single messages, you can ignore the delayQueue option, while you can set this option to true, if you need to add a fixed delay to all messages enqueued. 11.7.7. Server Side Encryption There is a set of Server Side Encryption attributes for a queue. The related option are serverSideEncryptionEnabled , keyMasterKeyId and kmsDataKeyReusePeriod . The SSE is disabled by default. 
You need to explicitly set the option to true and set the related parameters as queue attributes. 11.8. JMS-style Selectors SQS does not allow selectors, but you can effectively achieve this by using the Camel Filter EIP and setting an appropriate visibilityTimeout . When SQS dispatches a message, it will wait up to the visibility timeout before it will try to dispatch the message to a different consumer unless a DeleteMessage is received. By default, Camel will always send the DeleteMessage at the end of the route, unless the route ended in failure. To achieve appropriate filtering and not send the DeleteMessage even on successful completion of the route, use a Filter: from("aws2-sqs://MyQueue?amazonSQSClient=#client&defaultVisibilityTimeout=5000&deleteIfFiltered=false&deleteAfterRead=false") .filter("USD{header.login} == true") .setProperty(Sqs2Constants.SQS_DELETE_FILTERED, constant(true)) .to("mock:filter"); In the above code, if an exchange doesn't have an appropriate header, it will not make it through the filter AND also not be deleted from the SQS queue. After 5000 milliseconds, the message will become visible to other consumers. Note we must set the property Sqs2Constants.SQS_DELETE_FILTERED to true to instruct Camel to send the DeleteMessage , if being filtered. 11.9. Available Producer Operations single message (default) sendBatchMessage deleteMessage listQueues 11.10. Send Message You can set a SendMessageBatchRequest or an Iterable from("direct:start") .setBody(constant("Camel rocks!")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1"); 11.11. Send Batch Message You can set a SendMessageBatchRequest or an Iterable from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("sendBatchMessage")) .process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Collection c = new ArrayList(); c.add("team1"); c.add("team2"); c.add("team3"); c.add("team4"); exchange.getIn().setBody(c); } }) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1"); As result you'll get an exchange containing a SendMessageBatchResponse instance, that you can examinate to check what messages were successfull and what not. The id set on each message of the batch will be a Random UUID. 11.12. Delete single Message Use deleteMessage operation to delete a single message. You'll need to set a receipt handle header for the message you want to delete. from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("deleteMessage")) .setHeader(SqsConstants.RECEIPT_HANDLE, constant("123456")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1"); As result you'll get an exchange containing a DeleteMessageResponse instance, that you can use to check if the message was deleted or not. 11.13. List Queues Use listQueues operation to list queues. from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("listQueues")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1"); As result you'll get an exchange containing a ListQueuesResponse instance, that you can examinate to check the actual queues. 11.14. Purge Queue Use purgeQueue operation to purge queue. from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("purgeQueue")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1"); As result you'll get an exchange containing a PurgeQueueResponse instance. 11.15. 
Queue Autocreation With the option autoCreateQueue users are able to avoid the autocreation of an SQS Queue in case it doesn't exist. The default for this option is true . If set to false any operation on a not-existent queue in AWS won't be successful and an error will be returned. 11.16. Send Batch Message and Message Deduplication Strategy In case you're using a SendBatchMessage Operation, you can set two different kind of Message Deduplication Strategy: - useExchangeId - useContentBasedDeduplication The first one will use a ExchangeIdMessageDeduplicationIdStrategy , that will use the Exchange ID as parameter The other one will use a NullMessageDeduplicationIdStrategy , that will use the body as deduplication element. In case of send batch message operation, you'll need to use the useContentBasedDeduplication and on the Queue you're pointing you'll need to enable the content based deduplication option. 11.17. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sqs</artifactId> <version>USD{camel-version}</version> </dependency> where {camel-version} must be replaced by the actual version of Camel. 11.18. Spring Boot Auto-Configuration The component supports 44 options, which are listed below. Name Description Default Type camel.component.aws2-sqs.access-key Amazon AWS Access Key. String camel.component.aws2-sqs.amazon-a-w-s-host The hostname of the Amazon AWS cloud. amazonaws.com String camel.component.aws2-sqs.amazon-s-q-s-client To use the AmazonSQS as client. The option is a software.amazon.awssdk.services.sqs.SqsClient type. SqsClient camel.component.aws2-sqs.attribute-names A list of attribute names to receive when consuming. Multiple names can be separated by comma. String camel.component.aws2-sqs.auto-create-queue Setting the autocreation of the queue. false Boolean camel.component.aws2-sqs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-sqs.batch-separator Set the separator when passing a String to send batch message operation. , String camel.component.aws2-sqs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.aws2-sqs.concurrent-consumers Allows you to use multiple threads to poll the sqs queue to increase throughput. 1 Integer camel.component.aws2-sqs.configuration The AWS SQS default configuration. The option is a org.apache.camel.component.aws2.sqs.Sqs2Configuration type. Sqs2Configuration camel.component.aws2-sqs.default-visibility-timeout The default visibility timeout (in seconds). Integer camel.component.aws2-sqs.delay-queue Define if you want to apply delaySeconds option to the queue or on single messages. false Boolean camel.component.aws2-sqs.delay-seconds Delay sending messages for a number of seconds. 
Integer camel.component.aws2-sqs.delete-after-read Delete message from SQS after it has been read. true Boolean camel.component.aws2-sqs.delete-if-filtered Whether or not to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS_DELETE_FILTERED (CamelAwsSqsDeleteFiltered) set to true. true Boolean camel.component.aws2-sqs.enabled Whether to enable auto configuration of the aws2-sqs component. This is enabled by default. Boolean camel.component.aws2-sqs.extend-message-visibility If enabled then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. See details at Amazon docs. false Boolean camel.component.aws2-sqs.kms-data-key-reuse-period-seconds The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). Integer camel.component.aws2-sqs.kms-master-key-id The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. String camel.component.aws2-sqs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-sqs.maximum-message-size The maximumMessageSize (in bytes) an SQS message can contain for this queue. Integer camel.component.aws2-sqs.message-attribute-names A list of message attribute names to receive when consuming. Multiple names can be separated by comma. String camel.component.aws2-sqs.message-deduplication-id-strategy Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. useExchangeId String camel.component.aws2-sqs.message-group-id-strategy Only for FIFO queues. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. String camel.component.aws2-sqs.message-retention-period The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue. Integer camel.component.aws2-sqs.operation The operation to do in case the user don't want to send only a message. Sqs2Operations camel.component.aws2-sqs.override-endpoint Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-sqs.policy The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.aws2-sqs.protocol The underlying protocol used to communicate with SQS. 
https String camel.component.aws2-sqs.proxy-host To define a proxy host when instantiating the SQS client. String camel.component.aws2-sqs.proxy-port To define a proxy port when instantiating the SQS client. Integer camel.component.aws2-sqs.proxy-protocol To define a proxy protocol when instantiating the SQS client. Protocol camel.component.aws2-sqs.queue-owner-a-w-s-account-id Specify the queue owner aws account id when you need to connect the queue with different account owner. String camel.component.aws2-sqs.queue-url To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used, to connect to a mock implementation of SQS, for testing purposes. String camel.component.aws2-sqs.receive-message-wait-time-seconds If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait. Integer camel.component.aws2-sqs.redrive-policy Specify the policy that send message to DeadLetter queue. See detail at Amazon docs. String camel.component.aws2-sqs.region The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-sqs.secret-key Amazon AWS Secret Key. String camel.component.aws2-sqs.server-side-encryption-enabled Define if Server Side Encryption is enabled or not on the queue. false Boolean camel.component.aws2-sqs.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-sqs.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-sqs.use-default-credentials-provider Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. false Boolean camel.component.aws2-sqs.visibility-timeout The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only make sense if its different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently. Integer camel.component.aws2-sqs.wait-time-seconds Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response. Integer | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sqs-starter</artifactId> </dependency>",
"aws2-sqs://queueNameOrArn[?options]",
"aws2-sqs:queueNameOrArn",
"from(\"aws2-sqs://MyQueue?amazonSQSClient=#client&delay=5000&maxMessagesPerPoll=5\") .to(\"mock:result\");",
"from(\"aws2-sqs://MyQueue?amazonSQSClient=#client&delay=5000&maxMessagesPerPoll=5\") .to(\"mock:result\");",
"from(\"aws2-sqs://MyQueue?amazonSQSClient=#client&defaultVisibilityTimeout=5000&deleteIfFiltered=false&deleteAfterRead=false\") .filter(\"USD{header.login} == true\") .setProperty(Sqs2Constants.SQS_DELETE_FILTERED, constant(true)) .to(\"mock:filter\");",
"from(\"direct:start\") .setBody(constant(\"Camel rocks!\")) .to(\"aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1\");",
"from(\"direct:start\") .setHeader(SqsConstants.SQS_OPERATION, constant(\"sendBatchMessage\")) .process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Collection c = new ArrayList(); c.add(\"team1\"); c.add(\"team2\"); c.add(\"team3\"); c.add(\"team4\"); exchange.getIn().setBody(c); } }) .to(\"aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1\");",
"from(\"direct:start\") .setHeader(SqsConstants.SQS_OPERATION, constant(\"deleteMessage\")) .setHeader(SqsConstants.RECEIPT_HANDLE, constant(\"123456\")) .to(\"aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1\");",
"from(\"direct:start\") .setHeader(SqsConstants.SQS_OPERATION, constant(\"listQueues\")) .to(\"aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1\");",
"from(\"direct:start\") .setHeader(SqsConstants.SQS_OPERATION, constant(\"purgeQueue\")) .to(\"aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sqs</artifactId> <version>USD{camel-version}</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-aws2-sqs-component-starter |
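The SQS examples above reference the amazonSQSClient bean as #client without showing where it comes from. A minimal, self-contained sketch of building such a client with the AWS SDK v2 and binding it into the Camel registry follows; the region, credentials, and bean name are placeholders, and in a Spring Boot application you would more likely expose the SqsClient as a bean and rely on the autowired amazonSQSClient option instead:

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;

public class SqsClientSetup {
    public static void main(String[] args) {
        // Build the SqsClient that the aws2-sqs endpoints will reuse (placeholder credentials and region)
        SqsClient sqsClient = SqsClient.builder()
                .region(Region.EU_WEST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("myAccessKey", "mySecretKey")))
                .build();

        // Bind it under the name referenced in the endpoint URIs, for example amazonSQSClient=#client
        CamelContext context = new DefaultCamelContext();
        context.getRegistry().bind("client", sqsClient);
    }
}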
5.5.5. Restrict Permissions for Executable Directories | 5.5.5. Restrict Permissions for Executable Directories Be certain to only assign write permissions to the root user for any directory containing scripts or CGIs. This can be accomplished by typing the following commands: Also, always verify that any scripts running on the system work as intended before putting them into production. | [
"chown root <directory_name> chmod 755 <directory_name>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-server-http-cgi |
function::read_stopwatch_us | function::read_stopwatch_us Name function::read_stopwatch_us - Reads the time in microseconds for a stopwatch Synopsis Arguments name stopwatch name Description Returns time in microseconds for stopwatch name . Creates stopwatch name if it does not currently exist. | [
"read_stopwatch_us:long(name:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-read-stopwatch-us |
11.4. Setting up a Kerberos Client for Smart Cards | 11.4. Setting up a Kerberos Client for Smart Cards Smart cards can be used with Kerberos, but it requires additional configuration to recognize the X.509 (SSL) user certificates on the smart cards: Install the required PKI/OpenSSL package, along with the other client packages: Edit the /etc/krb5.conf configuration file to add a parameter for the public key infrastructure (PKI) to the [realms] section of the configuration. The pkinit_anchors parameter sets the location of the CA certificate bundle file. Add the PKI module information to the PAM configuration for both smart card authentication ( /etc/pam.d/smartcard-auth ) and system authentication ( /etc/pam.d/system-auth ). The line to be added to both files is as follows: If the OpenSC module does not work as expected, use the module from the coolkey package: /usr/lib64/pkcs11/libcoolkeypk11.so . In this case, consider contacting Red Hat Technical Support or filing a Bugzilla report about the problem. | [
"yum install krb5-pkinit yum install krb5-workstation krb5-libs",
"[realms] EXAMPLE.COM = { kdc = kdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com pkinit_anchors = FILE:/usr/local/example.com.crt }",
"auth optional pam_krb5.so use_first_pass no_subsequent_prompt preauth_options=X509_user_identity=PKCS11:/usr/lib64/pkcs11/opensc-pkcs11.so"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/krb-smart-cards |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Make sure you are logged in to the Jira website. Provide feedback by clicking on this link . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. If you want to be notified about future updates, please make sure you are assigned as Reporter . Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/red_hat_ha_solutions_for_sap_hana_s4hana_and_netweaver_based_sap_applications/feedback_ha-sol-hana-netweaver |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/con_making-open-source-more-inclusive |
4.2. General Properties of Fencing Devices | 4.2. General Properties of Fencing Devices Note To disable a fencing device/resource, you can set the target-role as you would for a normal resource. Note To prevent a specific node from using a fencing device, you can configure location constraints for the fencing resource. Table 4.1, "General Properties of Fencing Devices" describes the general properties you can set for fencing devices. Refer to Section 4.3, "Displaying Device-Specific Fencing Options" for information on fencing properties you can set for specific fencing devices. Note For information on more advanced fencing configuration properties, see Section 4.9, "Additional Fencing Configuration Options" Table 4.1. General Properties of Fencing Devices Field Type Default Description priority integer 0 The priority of the stonith resource. Devices are tried in order of highest priority to lowest. pcmk_host_map string A mapping of host names to ports numbers for devices that do not support host names. For example: node1:1;node2:2,3 tells the cluster to use port 1 for node1 and ports 2 and 3 for node2 pcmk_host_list string A list of machines controlled by this device (Optional unless pcmk_host_check=static-list ). pcmk_host_check string dynamic-list How to determine which machines are controlled by the device. Allowed values: dynamic-list (query the device), static-list (check the pcmk_host_list attribute), none (assume every device can fence every machine) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-genfenceprops-haar |
24.4.7. Monitoring Files and Directories with gamin | 24.4.7. Monitoring Files and Directories with gamin Starting with Red Hat Enterprise Linux 6.8, the GLib system library uses gamin for monitoring files and directories and detecting their modifications on NFS file systems. By default, gamin uses polling for NFS file systems instead of inotify . Changes on other file systems are monitored by the inotify monitor that is implemented in GLib directly. As a subset of the File Alteration Monitor (FAM) system, gamin re-implements the FAM specification with the inotify Linux kernel subsystem. It is a GNOME project, but without any GNOME dependencies. Both glib2 and gamin packages are installed by default. By default, gamin works without the need for any configuration and it reverts to using polling for all paths matching /mnt/* or /media/* on Linux. Users can override or extend these settings by modifying the content of one of the following configuration files: /etc/gamin/gaminrc USDHOME/.gaminrc /etc/gamin/mandatory_gaminrc The configuration file accepts only the following commands: Commands accepted by the configuration file notify To express that kernel monitoring should be used for matching paths. poll To express that polling should be used for matching paths. fsset To control what notification method is used on a filesystem type. An example of such a configuration file can be seen here: The three configuration files are loaded in this order: /etc/gamin/gaminrc ~/.gaminrc /etc/gamin/mandatory_gaminrc The /etc/gamin/mandatory_gaminrc configuration file allows the system administrator to override any potentially dangerous preferences set by the user. When checking a path to guess whether polling or kernel notification should be used, gamin first checks the user-provided rules in their declaration order within the configuration file and then checks the predefined rules. This way the first declaration for /mnt/local* in the example overrides the default one for /mnt/* . If gamin is not configured to use poll notifications on a particular path, it decides based on the file system the path is located on. | [
"configuration for gamin Can be used to override the default behaviour. notify filepath(s) : indicate to use kernel notification poll filepath(s) : indicate to use polling instead fsset fsname method poll_limit : indicate what method of notification for the file system kernel - use the kernel for notification poll - use polling for notification none - don't use any notification the poll_limit is the number of seconds that must pass before a resource is polled again. It is optional, and if it is not present the previous value will be used or the default. notify /mnt/local* /mnt/pictures* # use kernel notification on these paths poll /temp/* # use poll notification on these paths fsset nfs poll 10 # use polling on nfs mounts and poll once every 10 seconds"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-sysinfo-filesystems-system_gamin |
1.9.2.2. Cluster Status Tool | 1.9.2.2. Cluster Status Tool You can access the Cluster Status Tool ( Figure 1.29, " Cluster Status Tool " ) through the Cluster Management tab in Cluster Administration GUI. Figure 1.29. Cluster Status Tool The nodes and services displayed in the Cluster Status Tool are determined by the cluster configuration file ( /etc/cluster/cluster.conf ). You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s3-admin-overview-CSO |
Chapter 86. Openshift Build Config | Chapter 86. Openshift Build Config Since Camel 2.17 Only producer is supported The OpenShift Build Config component is one of the Kubernetes Components which provides a producer to execute Openshift Build Configs operations. 86.1. Dependencies When using openshift-build-configs with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 86.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 86.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 86.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 86.3. Component Options The Openshift Build Config component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 86.4. 
Endpoint Options The Openshift Build Config endpoint is configured using URI syntax: with the following path and query parameters: 86.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 86.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 86.5. Message Headers The Openshift Build Config component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesBuildConfigsLabels (producer) Constant: KUBERNETES_BUILD_CONFIGS_LABELS The Openshift Config Build labels. Map CamelKubernetesBuildConfigName (producer) Constant: KUBERNETES_BUILD_CONFIG_NAME The Openshift Config Build name. String 86.6. Supported producer operation listBuildConfigs listBuildConfigsByLabels getBuildConfig 86.7. Openshift Build Configs Producer Examples listBuilds: this operation list the Build Configs on an Openshift cluster. from("direct:list"). toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigs"). to("mock:result"); This operation returns a List of Builds from your Openshift cluster. listBuildsByLabels: this operation list the build configs by labels on an Openshift cluster. 
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILD_CONFIGS_LABELS, labels); } }); toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigsByLabels"). to("mock:result"); This operation returns a List of Build configs from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 86.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"openshift-build-configs:masterUrl",
"from(\"direct:list\"). toF(\"openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigs\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILD_CONFIGS_LABELS, labels); } }); toF(\"openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigsByLabels\"). to(\"mock:result\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-openshift-build-config-component-starter |
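The options listed above are ordinary Spring Boot configuration properties, so they are typically set in the application.properties (or application.yml) file of the application that pulls in camel-kubernetes-starter. The snippet below is only an illustrative sketch: the property names come from the tables above, but the choice of components and the values shown are assumptions, not recommendations.

    camel.component.kubernetes-pods.enabled=true
    camel.component.kubernetes-pods.lazy-start-producer=true
    camel.component.kubernetes-pods.bridge-error-handler=false
    camel.component.kubernetes-hpa.autowired-enabled=true

Setting lazy-start-producer=true defers producer startup until the first exchange is processed, which can keep a route from failing at application startup when the Kubernetes API server is temporarily unreachable, at the cost of a slower first message.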
Chapter 2. Installation | Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.4 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 subscriptions listed at https://access.redhat.com/solutions/472793 . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is also available in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional channel, which are listed in Section 2.1.2, "Packages from the Optional Channel" , cannot be installed from the ISO image. Note Packages that require the Optional channel cannot be installed from the ISO image. A list of packages that require enabling of the Optional channel is provided in Section 2.1.2, "Packages from the Optional Channel" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager .
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Channel Some of the Red Hat Software Collections packages require the Optional channel to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this channel, see the relevant Knowledgebase article at https://access.redhat.com/solutions/392003 . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional channel to be enabled are listed in the tables below. Note that packages from the Optional channel are unsupported. For details, see the Knowledgebase article at https://access.redhat.com/articles/1150793 . Table 2.1. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Channel devtoolset-8-build scl-utils-build devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-gcc-plugin-devel libmpc-devel devtoolset-9-build scl-utils-build devtoolset-9-dyninst-testsuite glibc-static devtoolset-9-gcc-plugin-devel libmpc-devel devtoolset-9-gdb source-highlight httpd24-mod_ldap apr-util-ldap httpd24-mod_session apr-util-openssl python27-python-debug tix python27-python-devel scl-utils-build python27-tkinter tix rh-git218-git-cvs cvsps rh-git218-git-svn perl-Git-SVN, subversion rh-git218-perl-Git-SVN subversion-perl rh-java-common-ant-apache-bsf rhino rh-java-common-batik rhino rh-maven35-xpp3-javadoc java-1.7.0-openjdk-javadoc, java-1.8.0-openjdk-javadoc, java-1.8.0-openjdk-javadoc-zip, java-11-openjdk-javadoc, java-11-openjdk-javadoc-zip rh-php72-php-pspell aspell rh-php73-php-devel pcre2-devel rh-php73-php-pspell aspell rh-python36-python-devel scl-utils-build rh-python36-python-sphinx texlive-framed, texlive-threeparttable, texlive-titlesec, texlive-wrapfig Table 2.2. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 6 Package from a Software Collection Required Package from the Optional Channel devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-elfutils-devel xz-devel devtoolset-8-gcc-plugin-devel gmp-devel, mpfr-devel devtoolset-8-libatomic-devel libatomic devtoolset-8-libgccjit mpfr python27-python-devel scl-utils-build rh-mariadb102-boost-devel libicu-devel rh-mariadb102-mariadb-bench perl-GD rh-mongodb34-boost-devel libicu-devel rh-perl524-perl-devel gdbm-devel, systemtap-sdt-devel rh-python36-python-devel scl-utils-build 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.4 requires the removal of any earlier pre-release versions, including Beta releases. 
If you have installed any version of Red Hat Software Collections 3.4, uninstall it from your system and install the new version as described in Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" . The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections 3.4 Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install rh-php72 and rh-mariadb102 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl526-perl-CPAN and rh-perl526-perl-Archive-Tar packages, type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby25-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see https://access.redhat.com/solutions/9907 . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default.
If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide . | [
"rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms>",
"~]# yum install rh-php72 rh-mariadb102",
"~]# yum install rh-perl526-perl-CPAN rh-perl526-perl-Archive-Tar",
"~]# debuginfo-install rh-ruby25-ruby"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.4_release_notes/chap-installation |
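As a concrete end-to-end illustration of the procedure above, the following commands enable the Software Collections repository on a Red Hat Enterprise Linux 7 Server system and install one collection. The repository name follows the rhel- variant -rhscl-7-rpms format described in Section 2.1.1, and rh-python36 is used only as an example collection; substitute the variant and collection that match your system and subscription.

    ~]# subscription-manager repos --enable rhel-server-rhscl-7-rpms
    ~]# yum install rh-python36
    ~]# yum list available rh-python36-\*

The last command lists the optional packages of the collection that are not installed by default, as described in Section 2.2.2.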
5.4. Backing up ext2, ext3, or ext4 File Systems | 5.4. Backing up ext2, ext3, or ext4 File Systems This procedure describes how to back up the content of an ext4, ext3, or ext2 file system into a file. Prerequisites If the system has been running for a long time, run the e2fsck utility on the partitions before backup: Procedure 5.1. Backing up ext2, ext3, or ext4 File Systems Back up configuration information, including the content of the /etc/fstab file and the output of the fdisk -l command. This is useful for restoring the partitions. To capture this information, run the sosreport or sysreport utilities. For more information about sosreport , see the What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? Knowledgebase article. Depending on the role of the partition: If the partition you are backing up is an operating system partition, boot your system into rescue mode. See the Booting to Rescue Mode section of the System Administrator's Guide . When backing up a regular, data partition, unmount it. Although it is possible to back up a data partition while it is mounted, the results of backing up a mounted data partition can be unpredictable. If you need to back up a mounted file system using the dump utility, do so when the file system is not under a heavy load. The more activity there is on the file system during the backup, the higher the risk of backup corruption. Use the dump utility to back up the content of the partitions: Replace backup-file with the path to the file where you want to store the backup. Replace device with the name of the ext4 partition you want to back up. Make sure that you are saving the backup to a directory mounted on a different partition than the partition you are backing up. Example 5.2. Backing up Multiple ext4 Partitions To back up the content of the /dev/sda1 , /dev/sda2 , and /dev/sda3 partitions into backup files stored in the /backup-files/ directory, use the following commands: To do a remote backup, use the ssh utility or configure a password-less ssh login. For more information on ssh and password-less login, see the Using the ssh Utility and Using Key-based Authentication sections of the System Administrator's Guide . For example, when using ssh : Example 5.3. Performing a Remote Backup Using ssh Note that if using standard redirection, you must pass the -f option separately. Additional Resources For more information, see the dump (8) man page. | [
"e2fsck /dev/ device",
"dump -0uf backup-file /dev/ device",
"dump -0uf /backup-files/sda1.dump /dev/sda1 # dump -0uf /backup-files/sda2.dump /dev/sda2 # dump -0uf /backup-files/sda3.dump /dev/sda3",
"dump -0u -f - /dev/ device | ssh root@ remoteserver.example.com dd of= backup-file"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ext4Backup |
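To confirm that a backup created with dump is readable before it is actually needed, you can list the contents of the archive with the companion restore utility, which is shipped in the same package as dump. The command below is a minimal sketch that uses the example backup file from Example 5.2; it only prints the table of contents of the archive and does not restore any data:

    restore -tf /backup-files/sda1.dump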
8.114. libica | 8.114. libica 8.114.1. RHBA-2014:1497 - libica bug fix and enhancement update Updated libica packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libica library contains a set of functions and utilities for accessing the IBM eServer Cryptographic Accelerator (ICA) hardware on IBM System z. Note The libica packages have been upgraded to upstream version 2.3.0, which provides a number of bug fixes and enhancements over the version. (BZ# 1053842 ) Users of libica are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libica |
Chapter 25. cron | Chapter 25. cron This chapter describes the commands under the cron command. 25.1. cron trigger create Create new trigger. Usage: Table 25.1. Positional arguments Value Summary name Cron trigger name workflow_identifier Workflow name or id workflow_input Workflow input Table 25.2. Command arguments Value Summary -h, --help Show this help message and exit --params PARAMS Workflow params --pattern <* * * * *> Cron trigger pattern --first-time <YYYY-MM-DD HH:MM> Date and time of the first execution. time is treated as local time unless --utc is also specified --count <integer> Number of wanted executions --utc All times specified should be treated as utc Table 25.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 25.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 25.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.2. cron trigger delete Delete trigger. Usage: Table 25.7. Positional arguments Value Summary cron_trigger Name of cron trigger(s). Table 25.8. Command arguments Value Summary -h, --help Show this help message and exit 25.3. cron trigger list List all cron triggers. Usage: Table 25.9. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 25.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 25.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 25.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 25.4. cron trigger show Show specific cron trigger. Usage: Table 25.14. Positional arguments Value Summary cron_trigger Cron trigger name Table 25.15. Command arguments Value Summary -h, --help Show this help message and exit Table 25.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 25.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 25.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 25.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack cron trigger create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--params PARAMS] [--pattern <* * * * *>] [--first-time <YYYY-MM-DD HH:MM>] [--count <integer>] [--utc] name workflow_identifier [workflow_input]",
"openstack cron trigger delete [-h] cron_trigger [cron_trigger ...]",
"openstack cron trigger list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]",
"openstack cron trigger show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] cron_trigger"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/cron |
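For example, the following commands sketch a typical lifecycle for a trigger that runs an existing workflow every day at 02:00 according to the cron pattern. The trigger name, workflow name, and workflow input (passed as a JSON string) are placeholders, not values defined elsewhere in this reference.

    openstack cron trigger create --pattern "0 2 * * *" nightly_cleanup cleanup_workflow '{"older_than_days": 7}'
    openstack cron trigger show nightly_cleanup
    openstack cron trigger list
    openstack cron trigger delete nightly_cleanup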
Chapter 4. Managing rules | Chapter 4. Managing rules The MTA plugin comes with a core set of System rules for analyzing projects and identifying migration and modernization issues. You can create and import custom rulesets. 4.1. Viewing rules You can view system and custom rules, if any, for the MTA plugin. Prerequisites To view system rules, the MTA server must be running. Procedure Click the Rulesets tab. Expand System to view system rulesets or Custom to view custom rulesets. Expand a ruleset. Double-click a rule to open it in a viewer. Click the Source tab to view the XML source of the rule. 4.2. Creating a custom ruleset You can create a custom ruleset in the MTA perspective. See the Rule Development Guide to learn more about creating custom XML rules. Procedure Click the Rulesets tab. Click the Create Ruleset icon ( ). Select a project and a directory for the ruleset. Enter the file name. Note The file must have the extension .windup.xml . Enter a ruleset ID, for example, my-ruleset-id . Optional: Select Generate quickstart template to add basic rule templates to the file. Click Finish . The ruleset file opens in an editor and you can add and edit rules in the file. Click the Source tab to edit the XML source of the ruleset file. You can select the new ruleset when you create a run configuration. 4.3. Importing a custom ruleset You can import a custom ruleset into the MTA plugin to analyze your projects. Prerequisites Custom ruleset file with a .windup.xml extension. See the Rule Development Guide for information about creating rulesets. Procedure Click the Rulesets tab. Click the Import Ruleset icon ( ). Browse to and select the XML rule file to import. The custom ruleset is displayed when you expand Custom on the Rulesets tab. 4.4. Submitting a custom ruleset You can submit your custom ruleset for inclusion in the official MTA rule repository. This allows your custom rules to be reviewed and included in subsequent releases of MTA. Procedure Click the Rulesets tab. Click the Arrow icon ( ) and select Submit Ruleset . Complete the following fields: Summary : Describe the purpose of the rule. This becomes the title of the submission. Code Sample : Enter an example of the source code that the rule should run against. Description : Enter a brief description of the rule. Click Choose Files and select the ruleset file. Click Submit . | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/eclipse_plugin_guide/managing-rules_eclipse-code-ready-studio-guide |
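For orientation, a custom ruleset file created or imported this way is a plain XML file whose root ruleset element carries the ruleset ID entered above. The skeleton below is only a rough, unvalidated sketch of that structure: the condition shown (a javaclass reference) and all names are illustrative, a real ruleset usually also includes a metadata section, and the exact elements and attributes should be checked against the Rule Development Guide.

    <?xml version="1.0"?>
    <ruleset id="my-ruleset-id" xmlns="http://windup.jboss.org/schema/jboss-ruleset">
        <rules>
            <rule id="my-ruleset-id-00001">
                <when>
                    <javaclass references="java.rmi.Remote"/>
                </when>
                <perform>
                    <hint title="RMI usage found" effort="1">
                        <message>Replace RMI with a supported remote invocation mechanism.</message>
                    </hint>
                </perform>
            </rule>
        </rules>
    </ruleset>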
Chapter 161. Ignite Queues Component | Chapter 161. Ignite Queues Component Available as of Camel version 2.17 The Ignite Queue endpoint is one of camel-ignite endpoints which allows you to interact with Ignite Queue data structures . This endpoint only supports producers. 161.1. Options The Ignite Queues component supports 4 options, which are listed below. Name Description Default Type ignite (producer) Sets the Ignite instance. Ignite configurationResource (producer) Sets the resource from where to load the configuration. It can be a: URI, String (URI) or an InputStream. Object igniteConfiguration (producer) Allows the user to set a programmatic IgniteConfiguration. IgniteConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Ignite Queues endpoint is configured using URI syntax: with the following path and query parameters: 161.1.1. Path Parameters (1 parameters): Name Description Default Type name Required The queue name. String 161.1.2. Query Parameters (7 parameters): Name Description Default Type capacity (producer) The queue capacity. Default: non-bounded. int configuration (producer) The collection configuration. Default: empty configuration. You can also conveniently set inner properties by using configuration.xyz=123 options. CollectionConfiguration operation (producer) The operation to invoke on the Ignite Queue. Superseded by the IgniteConstants.IGNITE_QUEUE_OPERATION header in the IN message. Possible values: CONTAINS, ADD, SIZE, REMOVE, ITERATOR, CLEAR, RETAIN_ALL, ARRAY, DRAIN, ELEMENT, PEEK, OFFER, POLL, TAKE, PUT. IgniteQueueOperation propagateIncomingBodyIfNo ReturnValue (producer) Sets whether to propagate the incoming body if the return type of the underlying Ignite operation is void. true boolean timeoutMillis (producer) The queue timeout in milliseconds. Default: no timeout. Long treatCollectionsAsCache Objects (producer) Sets whether to treat Collections as cache objects or as Collections of items to insert/update/compute, etc. false boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 161.2. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.ignite-queue.configuration-resource Sets the resource from where to load the configuration. It can be a: URI, String (URI) or an InputStream. The option is a java.lang.Object type. String camel.component.ignite-queue.enabled Enable ignite-queue component true Boolean camel.component.ignite-queue.ignite Sets the Ignite instance. The option is a org.apache.ignite.Ignite type. String camel.component.ignite-queue.ignite-configuration Allows the user to set a programmatic IgniteConfiguration. The option is a org.apache.ignite.configuration.IgniteConfiguration type. String camel.component.ignite-queue.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 161.2.1. Headers used This endpoint uses the following headers: Header name Constant Expected type Description CamelIgniteQueueOperation IgniteConstants.IGNITE_QUEUE_OPERATION IgniteQueueOperation enum Allows you to dynamically change the queue operation. 
CamelIgniteQueueMaxElements IgniteConstants.IGNITE_QUEUE_MAX_ELEMENTS Integer or int When invoking the DRAIN operation, the number of items to drain. CamelIgniteQueueTransferredCount IgniteConstants.IGNITE_QUEUE_TRANSFERRED_COUNT Integer or int The number of items transferred as the result of the DRAIN operation. CamelIgniteQueueTimeoutMillis IgniteConstants.IGNITE_QUEUE_TIMEOUT_MILLIS Long or long Dynamically sets the timeout in milliseconds to use when invoking the OFFER or POLL operations. | [
"ignite-queue:name"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ignite-queue-component |
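Because this endpoint only supports producers, it is typically invoked from another route, for example via a direct: endpoint. The Java DSL fragment below, placed inside a RouteBuilder configure() method, is a minimal sketch; the queue name and the direct: endpoints are assumptions, and it simply shows the operation URI option and the CamelIgniteQueueOperation header described in the tables above (IgniteConstants and IgniteQueueOperation come with the camel-ignite component).

    from("direct:add")
        .to("ignite-queue:myQueue?operation=ADD");

    from("direct:poll")
        .setHeader(IgniteConstants.IGNITE_QUEUE_OPERATION, constant(IgniteQueueOperation.POLL))
        .to("ignite-queue:myQueue");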
Chapter 2. Updating RPMs on an OSTree system | Chapter 2. Updating RPMs on an OSTree system Updating MicroShift on an rpm-ostree system such as Red Hat Enterprise Linux for Edge (RHEL for Edge) requires building a new RHEL for Edge image containing the new version of MicroShift and any associated optional RPMs. After you have the rpm-ostree image with MicroShift embedded, direct your system to boot into that operating system image. The procedures are the same for minor-version and patch updates. For example, use the same steps to upgrade from 4.17 to 4.18 or from 4.18.2 to 4.18.3. Note Downgrades other than automatic rollbacks are not supported. The following procedure is for updates only. 2.1. Applying updates on an rpm-ostree system To update MicroShift on an rpm-ostree system such as Red Hat Enterprise Linux for Edge (RHEL for Edge), embed the new version of MicroShift on a new operating system image. Back up and system rollback are automatic with this update type. You can also use this workflow to update applications running in the MicroShift cluster. Ensure compatibility between the application and the adjacent versions of MicroShift and RHEL for Edge before starting an update. Important The steps you use depends on how your existing deployment is set up. The following procedure outlines the general steps you can take, with links to the RHEL for Edge documentation. The RHEL for Edge documentation is your resource for specific details on building an updated operating system image. Prerequisites The system requirements for installing MicroShift have been met. You have root user access to the host. The version of MicroShift you have is compatible with the RHEL for Edge image you are preparing to use. Important You cannot downgrade MicroShift with this process. Downgrades other than automatic rollbacks are not supported. Procedure Create an image builder configuration file for adding the rhocp-4.18 RPM repository source required to pull MicroShift RPMs by running the following command: USD cat > rhocp-4.18.toml <<EOF id = "rhocp-4.18" name = "Red Hat OpenShift Container Platform 4.18 for RHEL 9" type = "yum-baseurl" url = "https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/rhocp/4.18/os" check_gpg = true check_ssl = true system = false rhsm = true EOF Add the update RPM source to the image builder by running the following command: USD sudo composer-cli sources add rhocp-4.18.toml Build a new image of RHEL for Edge that contains the new version of MicroShift. To determine the steps required, use the following documentation: Building a RHEL for Edge commit update Update the host to use the new image of RHEL for Edge. To determine the steps required, use the following documentation: Deploying RHEL for Edge image updates Reboot the host to apply updates by running the following command: USD sudo systemctl reboot | [
"cat > rhocp-4.18.toml <<EOF id = \"rhocp-4.18\" name = \"Red Hat OpenShift Container Platform 4.18 for RHEL 9\" type = \"yum-baseurl\" url = \"https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/rhocp/4.18/os\" check_gpg = true check_ssl = true system = false rhsm = true EOF",
"sudo composer-cli sources add rhocp-4.18.toml",
"sudo systemctl reboot"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/updating/microshift-update-rpms-ostree |
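For the image rebuild step in the procedure above, the updated MicroShift packages must be listed in the image builder blueprint so that they are resolved from the rhocp-4.18 source you added. The blueprint fragment below is a minimal sketch; the blueprint name, description, and version are placeholders, and your deployment may need additional packages beyond the microshift package shown here.

    name = "microshift-edge"
    description = "RHEL for Edge image with MicroShift"
    version = "0.0.2"

    [[packages]]
    name = "microshift"
    version = "*"

After editing the blueprint, push it with sudo composer-cli blueprints push <blueprint-file>.toml before starting the compose described in the RHEL for Edge documentation.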