Chapter 16. Performing rolling upgrades for Data Grid Server clusters
Chapter 16. Performing rolling upgrades for Data Grid Server clusters Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss and migrate data over the Hot Rod protocol. 16.1. Setting up target Data Grid clusters Create a cluster that uses the Data Grid version to which you plan to upgrade and then connect the source cluster to the target cluster using a remote cache store. Prerequisites Install Data Grid Server nodes with the desired version for your target cluster. Important Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and port offsets to separate the target and source clusters. Procedure Create a remote cache store configuration, in JSON format, that allows the target cluster to connect to the source cluster. Remote cache stores on the target cluster use the Hot Rod protocol to retrieve data from the source cluster. { "remote-store": { "cache": "myCache", "shared": true, "raw-values": true, "security": { "authentication": { "digest": { "username": "username", "password": "changeme", "realm": "default" } } }, "remote-server": [ { "host": "127.0.0.1", "port": 12222 } ] } } Use the Data Grid Command Line Interface (CLI) or REST API to add the remote cache store configuration to the target cluster so it can connect to the source cluster. CLI: Use the migrate cluster connect command on the target cluster. REST API: Invoke a POST request that includes the remote store configuration in the payload with the rolling-upgrade/source-connection method. Repeat the preceding step for each cache that you want to migrate. Switch clients over to the target cluster, so it starts handling all requests. Update client configuration with the location of the target cluster. Restart clients. Important If you need to migrate Indexed caches you must first migrate the internal ___protobuf_metadata cache so that the .proto schemas defined on the source cluster will also be present on the target cluster. Additional resources Remote cache store configuration schema 16.2. Synchronizing data to target clusters When you set up a target Data Grid cluster and connect it to a source cluster, the target cluster can handle client requests using a remote cache store and load data on demand. To completely migrate data to the target cluster, so you can decommission the source cluster, you can synchronize data. This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache that you want to migrate to the target cluster. Prerequisites Set up a target cluster with the appropriate Data Grid version. Procedure Start synchronizing each cache that you want to migrate to the target cluster with the Data Grid Command Line Interface (CLI) or REST API. CLI: Use the migrate cluster synchronize command. REST API: Use the ?action=sync-data parameter with a POST request. When the operation completes, Data Grid responds with the total number of entries copied to the target cluster. Disconnect each node in the target cluster from the source cluster. CLI: Use the migrate cluster disconnect command. REST API: Invoke a DELETE request. 
Next steps After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.
[ "{ \"remote-store\": { \"cache\": \"myCache\", \"shared\": true, \"raw-values\": true, \"security\": { \"authentication\": { \"digest\": { \"username\": \"username\", \"password\": \"changeme\", \"realm\": \"default\" } } }, \"remote-server\": [ { \"host\": \"127.0.0.1\", \"port\": 12222 } ] } }", "[//containers/default]> migrate cluster connect -c myCache --file=remote-store.json", "POST /rest/v2/caches/myCache/rolling-upgrade/source-connection", "migrate cluster synchronize -c myCache", "POST /rest/v2/caches/myCache?action=sync-data", "migrate cluster disconnect -c myCache", "DELETE /rest/v2/caches/myCache/rolling-upgrade/source-connection" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/rolling-upgrades
Chapter 15. Administering the Self-Hosted Engine
Chapter 15. Administering the Self-Hosted Engine 15.1. Maintaining the Self-Hosted Engine Self-hosted Engine Maintenance Modes The maintenance modes enable you to start, stop, and modify the Manager virtual machine without interference from the high-availability agents, and to restart and modify the self-hosted engine nodes in the environment without interfering with the Manager. There are three maintenance modes that can be enforced: global - All high-availability agents in the cluster are disabled from monitoring the state of the Manager virtual machine. The global maintenance mode must be applied for any setup or upgrade operations that require the ovirt-engine service to be stopped, such as upgrading to a later version of Red Hat Virtualization. local - The high-availability agent on the node issuing the command is disabled from monitoring the state of the Manager virtual machine. The node is exempt from hosting the Manager virtual machine while in local maintenance mode; if the node is hosting the Manager virtual machine when placed into this mode, the Manager will migrate to another node, provided there is one available. The local maintenance mode is recommended when applying system changes or updates to a self-hosted engine node. none - Disables maintenance mode, ensuring that the high-availability agents are operating. Setting Local Maintenance Stop the high-availability agent on a single self-hosted engine node. Setting the local maintenance mode from the Administration Portal Put a self-hosted engine node into local maintenance mode: In the Administration Portal, click Compute Hosts and select a self-hosted engine node. Click Management Maintenance . Local maintenance mode is automatically triggered for that node. After you have completed any maintenance tasks, disable the maintenance mode: In the Administration Portal, click Compute Hosts and select the self-hosted engine node. Click Management Activate . Setting the local maintenance mode from the command line Log in to a self-hosted engine node and put it into local maintenance mode: After you have completed any maintenance tasks, disable the maintenance mode: Setting Global Maintenance Stop the high-availability agents on all self-hosted engine nodes in the cluster. Setting the global maintenance mode from the Administration Portal Put all of the self-hosted engine nodes into global maintenance mode: In the Administration Portal, click Compute Hosts and select any self-hosted engine node. Click More Actions, then click Enable Global HA Maintenance . After you have completed any maintenance tasks, disable the maintenance mode: In the Administration Portal, click Compute Hosts and select any self-hosted engine node. Click More Actions, then click Disable Global HA Maintenance . Setting the global maintenance mode from the command line Log in to any self-hosted engine node and put it into global maintenance mode: After you have completed any maintenance tasks, disable the maintenance mode:
[ "hosted-engine --set-maintenance --mode=local", "hosted-engine --set-maintenance --mode=none", "hosted-engine --set-maintenance --mode=global", "hosted-engine --set-maintenance --mode=none" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-Administering_the_Self-Hosted_Engine
Chapter 2. AppliedClusterResourceQuota [quota.openshift.io/v1]
Chapter 2. AppliedClusterResourceQuota [quota.openshift.io/v1] Description AppliedClusterResourceQuota mirrors ClusterResourceQuota at a project scope, for projection into a project. It allows a project-admin to know which ClusterResourceQuotas are applied to his project and their associated usage. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ClusterResourceQuotaSpec defines the desired quota restrictions status object ClusterResourceQuotaStatus defines the actual enforced quota and its current usage 2.1.1. .spec Description ClusterResourceQuotaSpec defines the desired quota restrictions Type object Required selector quota Property Type Description quota ResourceQuotaSpec Quota defines the desired quota selector object ClusterResourceQuotaSelector is used to select projects. At least one of LabelSelector or AnnotationSelector must be present. If only one is present, it is the only selection criteria. If both are specified, the project must match both restrictions. 2.1.2. .spec.selector Description ClusterResourceQuotaSelector is used to select projects. At least one of LabelSelector or AnnotationSelector must be present. If only one is present, it is the only selection criteria. If both are specified, the project must match both restrictions. Type object Property Type Description annotations object (string) AnnotationSelector is used to select projects by annotation. labels LabelSelector LabelSelector is used to select projects by label. 2.1.3. .status Description ClusterResourceQuotaStatus defines the actual enforced quota and its current usage Type object Required total Property Type Description namespaces array Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. namespaces[] object ResourceQuotaStatusByNamespace gives status for a particular project total ResourceQuotaStatus Total defines the actual enforced quota and its current usage across all projects 2.1.4. .status.namespaces Description Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. Type array 2.1.5. 
.status.namespaces[] Description ResourceQuotaStatusByNamespace gives status for a particular project Type object Required namespace status Property Type Description namespace string Namespace the project this status applies to status ResourceQuotaStatus Status indicates how many resources have been consumed by this project 2.2. API endpoints The following API endpoints are available: /apis/quota.openshift.io/v1/appliedclusterresourcequotas GET : list objects of kind AppliedClusterResourceQuota /apis/quota.openshift.io/v1/namespaces/{namespace}/appliedclusterresourcequotas GET : list objects of kind AppliedClusterResourceQuota /apis/quota.openshift.io/v1/namespaces/{namespace}/appliedclusterresourcequotas/{name} GET : read the specified AppliedClusterResourceQuota 2.2.1. /apis/quota.openshift.io/v1/appliedclusterresourcequotas HTTP method GET Description list objects of kind AppliedClusterResourceQuota Table 2.1. HTTP responses HTTP code Response body 200 - OK AppliedClusterResourceQuotaList schema 401 - Unauthorized Empty 2.2.2. /apis/quota.openshift.io/v1/namespaces/{namespace}/appliedclusterresourcequotas HTTP method GET Description list objects of kind AppliedClusterResourceQuota Table 2.2. HTTP responses HTTP code Response body 200 - OK AppliedClusterResourceQuotaList schema 401 - Unauthorized Empty 2.2.3. /apis/quota.openshift.io/v1/namespaces/{namespace}/appliedclusterresourcequotas/{name} Table 2.3. Global path parameters Parameter Type Description name string name of the AppliedClusterResourceQuota HTTP method GET Description read the specified AppliedClusterResourceQuota Table 2.4. HTTP responses HTTP code Response body 200 - OK AppliedClusterResourceQuota schema 401 - Unauthorized Empty
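As a hypothetical usage sketch, the read-only endpoints above can be queried either with the oc client or directly against the API server; the namespace my-project, the API server URL, and the token handling below are placeholders, not values taken from this reference.
# List the AppliedClusterResourceQuotas projected into a single project
oc get appliedclusterresourcequotas -n my-project -o yaml
# Equivalent raw GET against the namespaced endpoint documented in Section 2.2.2
TOKEN=$(oc whoami -t)
curl -k -H "Authorization: Bearer ${TOKEN}" https://api.example.com:6443/apis/quota.openshift.io/v1/namespaces/my-project/appliedclusterresourcequotas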
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/appliedclusterresourcequota-quota-openshift-io-v1
API reference
API reference Red Hat build of MicroShift 4.18 MicroShift API reference Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/index
Chapter 4. Advanced topics
Chapter 4. Advanced topics This section covers topics that are beyond the scope of the introductory tutorial but are useful in real-world RPM packaging. 4.1. Signing packages Packages are signed to make sure no third party can alter their content. A user can add an additional layer of security by using the HTTPS protocol when downloading the package. There are three ways to sign a package: 4.1.1. Creating a GPG key Procedure Generate a GNU Privacy Guard (GPG) key pair: Confirm and see the generated key: Export the public key: Note Include the real name that you have selected for the key instead of <Key_name>. Import the exported public key into an RPM database: 4.1.2. Adding a signature to an already existing package This section describes the most usual case when a package is built without a signature. The signature is added just before the release of the package. To add a signature to a package, use the --addsign option provided by the rpm-sign package. Having more than one signature enables to record the package's path of ownership from the package builder to the end-user. Procedure Add a signature to a package: Note You are supposed to enter the password to unlock the secret key for the signature. 4.1.3. Checking the signatures of a package with multiple signatures Procedure To check the signatures of a package with multiple signatures, run the following: The two pgp strings in the output of the rpm --checksig command show that the package has been signed twice. 4.1.4. A practical example of adding a signature to an already existing package This section describes an example situation where adding a signature to an already existing package might be useful. A division of a company creates a package and signs it with the division's key. The company's headquarters then checks the package's signature and adds the corporate signature to the package, stating that the signed package is authentic. With two signatures, the package makes its way to a retailer. The retailer checks the signatures and, if they match, adds their signature as well. The package now makes its way to a company that wants to deploy the package. After checking every signature on the package, they know that it is an authentic copy. Depending on the deploying company's internal controls, they may choose to add their own signature, to inform their employees that the package has received their corporate approval 4.1.5. Replacing the signature on an already existing package This procedure describes how to change the public key without having to rebuild each package. Procedure To change the public key, run the following: Note You are supposed to enter the password to unlock the secret key for the signature. The --resign option also enables you to change the public key for multiple packages, as shown in the following procedure. Procedure To change the public key for multiple packages, execute: Note You are supposed to enter the password to unlock the secret key for the signature. 4.1.6. Signing a package at build-time Procedure Build the package with the rpmbuild command: Sign the package with the rpmsign command using the --addsign option: Optionally, verify the signature of a package: Note When building and signing multiple packages, use the following syntax to avoid entering the Pretty Good Privacy (PGP) passphrase multiple times. Note that you are supposed to enter the password to unlock the secret key for the signature. 4.2. More on macros This section covers selected built-in RPM Macros. 
For an exhaustive list of such macros, see RPM Documentation . 4.2.1. Defining your own macros The following section describes how to create a custom macro. Procedure Include the following line in the RPM SPEC file: All whitespace surrounding \ is removed. Name may be composed of alphanumeric characters, and the character _ and must be at least 3 characters in length. Inclusion of the (opts) field is optional: Simple macros do not contain the (opts) field. In this case, only recursive macro expansion is performed. Parametrized macros contain the (opts) field. The opts string between parentheses is passed to getopt(3) for argc/argv processing at the beginning of a macro invocation. Note Older RPM SPEC files use the %define <name> <body> macro pattern instead. The differences between %define and %global macros are as follows: %define has local scope. It applies to a specific part of a SPEC file. The body of a %define macro is expanded when used. %global has global scope. It applies to an entire SPEC file. The body of a %global macro is expanded at definition time. Important Macros are evaluated even if they are commented out or the name of the macro is given into the %changelog section of the SPEC file. To comment out a macro, use %% . For example: %%global . Additional resources For comprehensive information on macros capabilities, see RPM Documentation . 4.2.2. Using the %setup macro This section describes how to build packages with source code tarballs using different variants of the %setup macro. Note that the macro variants can be combined The rpmbuild output illustrates standard behavior of the %setup macro. At the beginning of each phase, the macro outputs Executing(%... ) , as shown in the below example. Example 4.1. Example %setup macro output The shell output is set with set -x enabled. To see the content of /var/tmp/rpm-tmp.DhddsG , use the --debug option because rpmbuild deletes temporary files after a successful build. This displays the setup of environment variables followed by for example: The %setup macro: Ensures that we are working in the correct directory. Removes residues of builds. Unpacks the source tarball. Sets up some default privileges. 4.2.2.1. Using the %setup -q macro The -q option limits the verbosity of the %setup macro. Only tar -xof is executed instead of tar -xvvof . Use this option as the first option. 4.2.2.2. Using the %setup -n macro The -n option is used to specify the name of the directory from expanded tarball. This is used in cases when the directory from expanded tarball has a different name from what is expected ( %{name}-%{version} ), which can lead to an error of the %setup macro. For example, if the package name is cello , but the source code is archived in hello-1.0.tgz and contains the hello/ directory, the SPEC file content needs to be as follows: 4.2.2.3. Using the %setup -c macro The -c option is used if the source code tarball does not contain any subdirectories and after unpacking, files from an archive fills the current directory. The -c option then creates the directory and steps into the archive expansion as shown below: The directory is not changed after archive expansion. 4.2.2.4. Using the %setup -D and %setup -T macros The -D option disables deleting of source code directory, and is particularly useful if the %setup macro is used several times. With the -D option, the following lines are not used: The -T option disables expansion of the source code tarball by removing the following line from the script: 4.2.2.5. 
Using the %setup -a and %setup -b macros The -a and -b options expand specific sources: The -b option stands for before , and it expands specific sources before entering the working directory. The -a option stands for after , and it expands those sources after entering. Their arguments are source numbers from the SPEC file preamble. In the following example, the cello-1.0.tar.gz archive contains an empty examples directory. The examples are shipped in a separate examples.tar.gz tarball and they expand into the directory of the same name. In this case, use -a 1 , if you want to expand Source1 after entering the working directory: In the following example, examples are provided in a separate cello-1.0-examples.tar.gz tarball, which expands into cello-1.0/examples . In this case, use -b 1 , to expand Source1 before entering the working directory: 4.2.3. Common RPM macros in the %files section This section lists advanced RPM Macros that are needed in the %files section of a SPEC file. Table 4.1. Advanced RPM Macros in the %files section Macro Definition %license The macro identifies the file listed as a LICENSE file and it will be installed and labeled as such by RPM. Example: %license LICENSE %doc The macro identifies a file listed as documentation and it will be installed and labeled as such by RPM. The macro is used for documentation about the packaged software and also for code examples and various accompanying items. In the event code examples are included, care should be taken to remove executable mode from the file. Example: %doc README %dir The macro ensures that the path is a directory owned by this RPM. This is important so that the RPM file manifest accurately knows what directories to clean up on uninstall. Example: %dir %{_libdir}/%{name} %config(noreplace) The macro ensures that the following file is a configuration file and therefore should not be overwritten (or replaced) on a package install or update if the file has been modified from the original installation checksum. If there is a change, the file will be created with .rpmnew appended to the end of the filename upon upgrade or install so that the pre-existing or modified file on the target system is not modified. Example: %config(noreplace) %{_sysconfdir}/%{name}/%{name}.conf 4.2.4. Displaying the built-in macros provides multiple built-in RPM macros. Procedure To display all built-in RPM macros, run: Note The output is quite sizeable. To narrow the result, use the command above with the grep command. To find information about the RPMs macros for your system's version of RPM, run: Note RPM macros are the files titled macros in the output directory structure. 4.2.5. RPM distribution macros Different distributions provide different sets of recommended RPM macros based on the language implementation of the software being packaged or the specific guidelines of the distribution. The sets of recommended RPM macros are often provided as RPM packages, ready to be installed with the yum package manager. Once installed, the macro files can be found in the /usr/lib/rpm/macros.d/ directory. To display the raw RPM macro definitions, run: The above output displays the raw RPM macro definitions. To determine what a macro does and how it can be helpful when packaging RPMs, run the rpm --eval command with the name of the macro used as its argument: For more information, see the rpm man page. 4.2.5.1. Creating custom macros You can override the distribution macros in the ~/.rpmmacros file with your custom macros. 
Any changes that you make affect every build on your machine. Warning Defining any new macros in the ~/.rpmmacros file is not recommended. Such macros would not be present on other machines, where users may want to try to rebuild your package. To override a macro, run : You can create the directory from the example above, including all subdirectories through the rpmdev-setuptree utility. The value of this macro is by default ~/rpmbuild . The macro above is often used to pass to Makefile, for example make %{?_smp_mflags} , and to set a number of concurrent processes during the build phase. By default, it is set to -jX , where X is a number of cores. If you alter the number of cores, you can speed up or slow down a build of packages. 4.3. Epoch, Scriptlets and Triggers This section covers Epoch , Scriptlets , and Triggers , which represent advanced directives for RMP SPEC files. All these directives influence not only the SPEC file, but also the end machine on which the resulting RPM is installed. 4.3.1. The Epoch directive The Epoch directive enables to define weighted dependencies based on version numbers. If this directive is not listed in the RPM SPEC file, the Epoch directive is not set at all. This is contrary to common belief that not setting Epoch results in an Epoch of 0. However, the YUM utility treats an unset Epoch as the same as an Epoch of 0 for the purposes of depsolving. However, listing Epoch in a SPEC file is usually omitted because in majority of cases introducing an Epoch value skews the expected RPM behavior when comparing versions of packages. Example 4.2. Using Epoch If you install the foobar package with Epoch: 1 and Version: 1.0 , and someone else packages foobar with Version: 2.0 but without the Epoch directive, the new version will never be considered an update. The reason being that the Epoch version is preferred over the traditional Name-Version-Release marker that signifies versioning for RPM Packages. Using of Epoch is thus quite rare. However, Epoch is typically used to resolve an upgrade ordering issue. The issue can appear as a side effect of upstream change in software version number schemes or versions incorporating alphabetical characters that cannot always be compared reliably based on encoding. 4.3.2. Scriptlets Scriptlets are a series of RPM directives that are executed before or after packages are installed or deleted. Use Scriptlets only for tasks that cannot be done at build time or in an start up script. 4.3.2.1. Scriptlets directives A set of common Scriptlet directives exists. They are similar to the SPEC file section headers, such as %build or %install . They are defined by multi-line segments of code, which are often written as a standard POSIX shell script. However, they can also be written in other programming languages that RPM for the target machine's distribution accepts. RPM Documentation includes an exhaustive list of available languages. The following table includes Scriptlet directives listed in their execution order. Note that a package containing the scripts is installed between the %pre and %post directive, and it is uninstalled between the %preun and %postun directive. Table 4.2. Scriptlet directives Directive Definition %pretrans Scriptlet that is executed just before installing or removing any package. %pre Scriptlet that is executed just before installing the package on the target system. %post Scriptlet that is executed just after the package was installed on the target system. 
%preun Scriptlet that is executed just before uninstalling the package from the target system. %postun Scriptlet that is executed just after the package was uninstalled from the target system. %posttrans Scriptlet that is executed at the end of the transaction. 4.3.2.2. Turning off a scriptlet execution To turn off the execution of any scriptlet, use the rpm command together with the --no_scriptlet_name_ option. Procedure For example, to turn off the execution of the %pretrans scriptlets, run: You can also use the -- noscripts option, which is equivalent to all of the following: --nopre --nopost --nopreun --nopostun --nopretrans --noposttrans Additional resources For more details, see the rpm(8) man page. 4.3.2.3. Scriptlets macros The Scriptlets directives also work with RPM macros. The following example shows the use of systemd scriptlet macro, which ensures that systemd is notified about a new unit file. 4.3.3. The Triggers directives Triggers are RPM directives which provide a method for interaction during package installation and uninstallation. Warning Triggers may be executed at an unexpected time, for example on update of the containing package. Triggers are difficult to debug, therefore they need to be implemented in a robust way so that they do not break anything when executed unexpectedly. For these reasons, {RH} recommends to minimize the use of Triggers . The order of execution and the details for each existing Triggers are listed below: The above items are found in the /usr/share/doc/rpm-4.*/triggers file. 4.3.4. Using non-shell scripts in a SPEC file The -p scriptlet option in a SPEC file enables the user to invoke a specific interpreter instead of the default shell scripts interpreter ( -p /bin/sh ). The following procedure describes how to create a script, which prints out a message after installation of the pello.py program: Procedure Open the pello.spec file. Find the following line: Under the above line, insert: Install your package: Check the output message after the installation: Note To use a Python 3 script, include the following line under install -m in a SPEC file: To use a Lua script, include the following line under install -m in a SPEC file: This way, you can specify any interpreter in a SPEC file. 4.4. RPM conditionals RPM Conditionals enable conditional inclusion of various sections of the SPEC file. Conditional inclusions usually deal with: Architecture-specific sections Operating system-specific sections Compatibility issues between various versions of operating systems Existence and definition of macros 4.4.1. RPM conditionals syntax RPM conditionals use the following syntax: If expression is true, then do some action: If expression is true, then do some action, in other case, do another action: 4.4.2. RPM conditionals examples This section provides multiple examples of RPM conditionals. 4.4.2.1. The %if conditionals Example 4.3. Using the %if conditional to handle compatibility between 8 and other operating systems This conditional handles compatibility between RHEL 8 and other operating systems in terms of support of the AS_FUNCTION_DESCRIBE macro. If the package is built for RHEL, the %rhel macro is defined, and it is expanded to RHEL version. If its value is 8, meaning the package is build for RHEL 8, then the references to AS_FUNCTION_DESCRIBE, which is not supported by RHEL 8, are deleted from autoconfig scripts. Example 4.4. Using the %if conditional to handle definition of macros This conditional handles definition of macros. 
If the %milestone or the %revision macros are set, the %ruby_archive macro, which defines the name of the upstream tarball, is redefined. 4.4.2.2. Specialized variants of %if conditionals The %ifarch conditional, %ifnarch conditional and %ifos conditional are specialized variants of the %if conditionals. These variants are commonly used, hence they have their own macros. 4.4.2.2.1. The %ifarch conditional The %ifarch conditional is used to begin a block of the SPEC file that is architecture-specific. It is followed by one or more architecture specifiers, each separated by commas or whitespace. Example 4.5. An example use of the %ifarch conditional All the contents of the SPEC file between %ifarch and %endif are processed only on the 32-bit AMD and Intel architectures or Sun SPARC-based systems. 4.4.2.2.2. The %ifnarch conditional The %ifnarch conditional has a reverse logic than %ifarch conditional. Example 4.6. An example use of the %ifnarch conditional All the contents of the SPEC file between %ifnarch and %endif are processed only if not done on a Digital Alpha/AXP-based system. 4.4.2.2.3. The %ifos conditional The %ifos conditional is used to control processing based on the operating system of the build. It can be followed by one or more operating system names. Example 4.7. An example use of the %ifos conditional All the contents of the SPEC file between %ifos and %endif are processed only if the build was done on a Linux system.
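To see how some of the macros discussed in this section expand on your own system, the following commands are a quick, illustrative sketch; the macro names are taken from the examples above and the output will differ between machines.
# Show how the build-tree and parallel-make macros expand on this system
rpm --eval '%{_topdir}'
rpm --eval '%{?_smp_mflags}'
# List the distribution-provided macro files described in Section 4.2.5
ls /usr/lib/rpm/macros.d/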
[ "# gpg --gen-key", "# gpg --list-keys", "# gpg --export -a '<Key_name>' > RPM-GPG-KEY-pmanager", "# rpm --import RPM-GPG-KEY-pmanager", "rpm --addsign blather-7.9-1.x86_64.rpm", "rpm --checksig blather-7.9-1.x86_64.rpm blather-7.9-1.x86_64.rpm: size pgp pgp md5 OK", "rpm --resign blather-7.9-1.x86_64.rpm", "rpm --resign b*.rpm", "rpmbuild blather-7.9.spec", "rpmsign --addsign blather-7.9-1.x86_64.rpm", "rpm --checksig blather-7.9-1.x86_64.rpm blather-7.9-1.x86_64.rpm: size pgp md5 OK", "rpmbuild -ba --sign b*.spec", "%global <name>[(opts)] <body>", "Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.DhddsG", "cd '/builddir/build/BUILD' rm -rf 'cello-1.0' /usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xof - STATUS=USD? if [ USDSTATUS -ne 0 ]; then exit USDSTATUS fi cd 'cello-1.0' /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .", "Name: cello Source0: https://example.com/%{name}/release/hello-%{version}.tar.gz ... %prep %setup -n hello", "/usr/bin/mkdir -p cello-1.0 cd 'cello-1.0'", "rm -rf 'cello-1.0'", "/usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xvvof -", "Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: examples.tar.gz ... %prep %setup -a 1", "Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: %{name}-%{version}-examples.tar.gz ... %prep %setup -b 1", "--showrc", "-ql rpm", "--showrc", "--eval %{_MACRO}", "%_topdir /opt/some/working/directory/rpmbuild", "%_smp_mflags -l3", "rpm --nopretrans", "rpm --showrc | grep systemd -14: transaction_systemd_inhibit %{ plugindir}/systemd_inhibit.so -14: _journalcatalogdir /usr/lib/systemd/catalog -14: _presetdir /usr/lib/systemd/system-preset -14: _unitdir /usr/lib/systemd/system -14: _userunitdir /usr/lib/systemd/user /usr/lib/systemd/systemd-binfmt %{? } >/dev/null 2>&1 || : /usr/lib/systemd/systemd-sysctl %{? } >/dev/null 2>&1 || : -14: systemd_post -14: systemd_postun -14: systemd_postun_with_restart -14: systemd_preun -14: systemd_requires Requires(post): systemd Requires(preun): systemd Requires(postun): systemd -14: systemd_user_post %systemd_post --user --global %{? } -14: systemd_user_postun %{nil} -14: systemd_user_postun_with_restart %{nil} -14: systemd_user_preun systemd-sysusers %{? } >/dev/null 2>&1 || : echo %{? } | systemd-sysusers - >/dev/null 2>&1 || : systemd-tmpfiles --create %{? } >/dev/null 2>&1 || : rpm --eval %{systemd_post} if [ USD1 -eq 1 ] ; then # Initial installation systemctl preset >/dev/null 2>&1 || : fi rpm --eval %{systemd_postun} systemctl daemon-reload >/dev/null 2>&1 || : rpm --eval %{systemd_preun} if [ USD1 -eq 0 ] ; then # Package removal, not upgrade systemctl --no-reload disable > /dev/null 2>&1 || : systemctl stop > /dev/null 2>&1 || : fi", "all-%pretrans ... any-%triggerprein (%triggerprein from other packages set off by new install) new-%triggerprein new-%pre for new version of package being installed ... (all new files are installed) new-%post for new version of package being installed any-%triggerin (%triggerin from other packages set off by new install) new-%triggerin old-%triggerun any-%triggerun (%triggerun from other packages set off by old uninstall) old-%preun for old version of package being removed ... (all old files are removed) old-%postun for old version of package being removed old-%triggerpostun any-%triggerpostun (%triggerpostun from other packages set off by old un install) ... 
all-%posttrans", "install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/", "%post -p /usr/bin/python3 print(\"This is {} code\".format(\"python\"))", "yum install /home/<username>/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm", "Installing : pello-0.1.2-1.el8.noarch 1/1 Running scriptlet: pello-0.1.2-1.el8.noarch 1/1 This is python code", "%post -p /usr/bin/python3", "%post -p <lua>", "%if expression ... %endif", "%if expression ... %else ... %endif", "%if 0%{?rhel} == 8 sed -i '/AS_FUNCTION_DESCRIBE/ s/^/ /' configure.in sed -i '/AS_FUNCTION_DESCRIBE/ s/^/ /' acinclude.m4 %endif", "%define ruby_archive %{name}-%{ruby_version} %if 0%{?milestone:1}%{?revision:1} != 0 %define ruby_archive %{ruby_archive}-%{?milestone}%{?!milestone:%{?revision:r%{revision}}} %endif", "%ifarch i386 sparc ... %endif", "%ifnarch alpha ... %endif", "%ifos linux ... %endif" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/rpm_packaging_guide/advanced-topics
6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest
6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest You can use I/O scheduling on a Red Hat Enterprise Linux guest virtual machine, regardless of the hypervisor on which the guest is running. The following is a list of benefits and issues that should be considered: Red Hat Enterprise Linux guests often benefit greatly from using the noop scheduler. The scheduler merges small requests from the guest operating system into larger requests before sending the I/O to the hypervisor. This enables the hypervisor to process the I/O requests more efficiently, which can significantly improve the guest's I/O performance. Depending on the workload I/O and how storage devices are attached, schedulers like deadline can be more beneficial than noop . Red Hat recommends performance testing to verify which scheduler offers the best performance impact. Guests that use storage accessed by iSCSI, SR-IOV, or physical device passthrough should not use the noop scheduler. These methods do not allow the host to optimize I/O requests to the underlying physical device. Note In virtualized environments, it is sometimes not beneficial to schedule I/O on both the host and guest layers. If multiple guests use storage on a file system or block device managed by the host operating system, the host may be able to schedule I/O more efficiently because it is aware of requests from all guests. In addition, the host knows the physical layout of storage, which may not map linearly to the guests' virtual storage. All scheduler tuning should be tested under normal operating conditions, as synthetic benchmarks typically do not accurately compare performance of systems using shared resources in virtual environments. 6.2.1. Configuring the I/O Scheduler for Red Hat Enterprise Linux 7 The default scheduler used on a Red Hat Enterprise Linux 7 system is deadline . However, on a Red Hat Enterprise Linux 7 guest machine, it may be beneficial to change the scheduler to noop , by doing the following: In the /etc/default/grub file, change the elevator=deadline string on the GRUB_CMDLINE_LINUX line to elevator=noop . If there is no elevator= string, add elevator=noop at the end of the line. The following shows the /etc/default/grub file after a successful change. Rebuild the /boot/grub2/grub.cfg file. On a BIOS-based machine: On an UEFI-based machine:
[ "cat /etc/default/grub [...] GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=vg00/lvroot rhgb quiet elevator=noop\" [...]", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-io-scheduler-guest
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_vmware_vsphere/providing-feedback-on-red-hat-documentation_rhodf
Chapter 2. BuildConfig [build.openshift.io/v1]
Chapter 2. BuildConfig [build.openshift.io/v1] Description Build configurations define a build process for new container images. There are three types of builds possible - a container image build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary container images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the container image registry specified in the "output" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created. Each build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have "output" set can be used to test code or run a verification build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BuildConfigSpec describes when and how builds are created status object BuildConfigStatus contains current state of the build config object. 2.1.1. .spec Description BuildConfigSpec describes when and how builds are created Type object Required strategy Property Type Description completionDeadlineSeconds integer completionDeadlineSeconds is an optional duration in seconds, counted from the time when a build pod gets scheduled in the system, that the build may be active on a node before the system actively tries to terminate the build; value must be positive integer failedBuildsHistoryLimit integer failedBuildsHistoryLimit is the number of old failed builds to retain. When a BuildConfig is created, the 5 most recent failed builds are retained unless this value is set. If removed after the BuildConfig has been created, all failed builds are retained. mountTrustedCA boolean mountTrustedCA bind mounts the cluster's trusted certificate authorities, as defined in the cluster's proxy configuration, into the build. This lets processes within a build trust components signed by custom PKI certificate authorities, such as private artifact repositories and HTTPS proxies. When this field is set to true, the contents of /etc/pki/ca-trust within the build are managed by the build container, and any changes to this directory or its subdirectories (for example - within a Dockerfile RUN instruction) are not persisted in the build's output image. nodeSelector object (string) nodeSelector is a selector which must be true for the build pod to fit on a node. If nil, it can be overridden by default build nodeselector values for the cluster. 
If set to an empty map or a map with any values, default build nodeselector values are ignored. output object BuildOutput is input to a build strategy and describes the container image that the strategy should produce. postCommit object A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . 1. Shell script: "postCommit": { "script": "rake test --verbose", } The above is a convenient form which is equivalent to: "postCommit": { "command": ["/bin/sh", "-ic"], "args": ["rake test --verbose"] } 2. A command as the image entrypoint: "postCommit": { "commit": ["rake", "test", "--verbose"] } Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint . 3. Pass arguments to the default entrypoint: "postCommit": { "args": ["rake", "test", "--verbose"] } This form is only useful if the image entrypoint can handle arguments. 4. Shell script with arguments: "postCommit": { "script": "rake test USD1", "args": ["--verbose"] } This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be "/bin/sh" and USD1, USD2, etc, are the positional arguments from Args. 5. Command with arguments: "postCommit": { "command": ["rake", "test"], "args": ["--verbose"] } This form is equivalent to appending the arguments to the Command slice. It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. resources ResourceRequirements resources computes resource requirements to execute the build. revision object SourceRevision is the revision or commit information from the source for the build runPolicy string RunPolicy describes how the new build created from this build configuration will be scheduled for execution. This is optional, if not specified we default to "Serial". serviceAccount string serviceAccount is the name of the ServiceAccount to use to run the pod created by this build. The pod will be allowed to use secrets referenced by the ServiceAccount source object BuildSource is the SCM used for the build. strategy object BuildStrategy contains the details of how to perform a build. successfulBuildsHistoryLimit integer successfulBuildsHistoryLimit is the number of old successful builds to retain. When a BuildConfig is created, the 5 most recent successful builds are retained unless this value is set. If removed after the BuildConfig has been created, all successful builds are retained. triggers array triggers determine how new Builds can be launched from a BuildConfig. If no triggers are defined, a new build can only occur as a result of an explicit client build creation. triggers[] object BuildTriggerPolicy describes a policy for a single trigger that results in a new Build. 2.1.2. 
.spec.output Description BuildOutput is input to a build strategy and describes the container image that the strategy should produce. Type object Property Type Description imageLabels array imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. imageLabels[] object ImageLabel represents a label applied to the resulting image. pushSecret LocalObjectReference PushSecret is the name of a Secret that would be used for setting up the authentication for executing the Docker push to authentication enabled Docker Registry (or Docker Hub). to ObjectReference to defines an optional location to push the output of this build to. Kind must be one of 'ImageStreamTag' or 'DockerImage'. This value will be used to look up a container image repository to push to. In the case of an ImageStreamTag, the ImageStreamTag will be looked for in the namespace of the build unless Namespace is specified. 2.1.3. .spec.output.imageLabels Description imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. Type array 2.1.4. .spec.output.imageLabels[] Description ImageLabel represents a label applied to the resulting image. Type object Required name Property Type Description name string name defines the name of the label. It must have non-zero length. value string value defines the literal value of the label. 2.1.5. .spec.postCommit Description A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . Shell script: A command as the image entrypoint: Pass arguments to the default entrypoint: Shell script with arguments: Command with arguments: It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. Type object Property Type Description args array (string) args is a list of arguments that are provided to either Command, Script or the container image's default entrypoint. The arguments are placed immediately after the command to be run. command array (string) command is the command to run. It may not be specified with Script. This might be needed if the image doesn't have /bin/sh , or if you do not want to use a shell. In all other cases, using Script might be more convenient. script string script is a shell script to be run with /bin/sh -ic . It may not be specified with Command. Use Script when a shell script is appropriate to execute the post build hook, for example for running unit tests with rake test . If you need control over the image entrypoint, or if the image does not have /bin/sh , use Command and/or Args. The -i flag is needed to support CentOS and RHEL images that use Software Collections (SCL), in order to have the appropriate collections enabled in the shell. 
E.g., in the Ruby image, this is necessary to make ruby , bundle and other binaries available in the PATH. 2.1.6. .spec.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 2.1.7. .spec.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 2.1.8. .spec.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 2.1.9. .spec.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 2.1.10. .spec.source Description BuildSource is the SCM used for the build. Type object Property Type Description binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. configMaps array configMaps represents a list of configMaps and their destinations that will be used for the build. configMaps[] object ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. contextDir string contextDir specifies the sub-directory where the source code for the application exists. This allows to have buildable sources in directory other than root of repository. dockerfile string dockerfile is the raw contents of a Dockerfile which should be built. When this option is specified, the FROM may be modified based on your strategy base image and additional ENV stanzas from your strategy environment will be added after the FROM, but before the rest of your Dockerfile stanzas. The Dockerfile source type may be used with other options like git - in those cases the Git repo will have any innate Dockerfile replaced in the context dir. git object GitBuildSource defines the parameters of a Git SCM images array images describes a set of images to be used to provide source for the build images[] object ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). 
secrets array secrets represents a list of secrets and their destinations that will be used only for the build. secrets[] object SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. sourceSecret LocalObjectReference sourceSecret is the name of a Secret that would be used for setting up the authentication for cloning private repository. The secret contains valid credentials for remote repository, where the data's key represent the authentication method to be used and value is the base64 encoded credentials. Supported auth methods are: ssh-privatekey. type string type of build input to accept 2.1.11. .spec.source.binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 2.1.12. .spec.source.configMaps Description configMaps represents a list of configMaps and their destinations that will be used for the build. Type array 2.1.13. .spec.source.configMaps[] Description ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. Type object Required configMap Property Type Description configMap LocalObjectReference configMap is a reference to an existing configmap that you want to use in your build. destinationDir string destinationDir is the directory where the files from the configmap should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. 2.1.14. .spec.source.git Description GitBuildSource defines the parameters of a Git SCM Type object Required uri Property Type Description httpProxy string httpProxy is a proxy used to reach the git repository over http httpsProxy string httpsProxy is a proxy used to reach the git repository over https noProxy string noProxy is the list of domains for which the proxy should not be used ref string ref is the branch/tag/ref to build. uri string uri points to the source that will be built. The structure of the source will depend on the type of build to run 2.1.15. .spec.source.images Description images describes a set of images to be used to provide source for the build Type array 2.1.16. .spec.source.images[] Description ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. 
A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). Type object Required from Property Type Description as array (string) A list of image names that this source will be used in place of during a multi-stage container image build. For instance, a Dockerfile that uses "COPY --from=nginx:latest" will first check for an image source that has "nginx:latest" in this field before attempting to pull directly. If the Dockerfile does not reference an image source it is ignored. This field and paths may both be set, in which case the contents will be used twice. from ObjectReference from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to copy source from. paths array paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. paths[] object ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. pullSecret LocalObjectReference pullSecret is a reference to a secret to be used to pull the image from a registry If the image is pulled from the OpenShift registry, this field does not need to be set. 2.1.17. .spec.source.images[].paths Description paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. Type array 2.1.18. .spec.source.images[].paths[] Description ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. Type object Required sourcePath destinationDir Property Type Description destinationDir string destinationDir is the relative directory within the build directory where files copied from the image are placed. sourcePath string sourcePath is the absolute path of the file or directory inside the image to copy to the build directory. If the source path ends in /. then the content of the directory will be copied, but the directory itself will not be created at the destination. 2.1.19. .spec.source.secrets Description secrets represents a list of secrets and their destinations that will be used only for the build. Type array 2.1.20. .spec.source.secrets[] Description SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. Type object Required secret Property Type Description destinationDir string destinationDir is the directory where the files from the secret should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. Later, when the script finishes, all files injected will be truncated to zero length. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. 
secret LocalObjectReference secret is a reference to an existing secret that you want to use in your build. 2.1.21. .spec.strategy Description BuildStrategy contains the details of how to perform a build. Type object Property Type Description customStrategy object CustomBuildStrategy defines input parameters specific to Custom build. dockerStrategy object DockerBuildStrategy defines input parameters specific to container image build. jenkinsPipelineStrategy object JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines sourceStrategy object SourceBuildStrategy defines input parameters specific to an Source build. type string type is the kind of build strategy. 2.1.22. .spec.strategy.customStrategy Description CustomBuildStrategy defines input parameters specific to Custom build. Type object Required from Property Type Description buildAPIVersion string buildAPIVersion is the requested API version for the Build object serialized and passed to the custom builder env array (EnvVar) env contains additional environment variables you want to pass into a builder container. exposeDockerSocket boolean exposeDockerSocket will allow running Docker commands (and build container images) from inside the container. forcePull boolean forcePull describes if the controller should configure the build pod to always pull the images for the builder or only pull if it is not present locally from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries secrets array secrets is a list of additional secrets that will be included in the build pod secrets[] object SecretSpec specifies a secret to be included in a build pod and its corresponding mount point 2.1.23. .spec.strategy.customStrategy.secrets Description secrets is a list of additional secrets that will be included in the build pod Type array 2.1.24. .spec.strategy.customStrategy.secrets[] Description SecretSpec specifies a secret to be included in a build pod and its corresponding mount point Type object Required secretSource mountPath Property Type Description mountPath string mountPath is the path at which to mount the secret secretSource LocalObjectReference secretSource is a reference to the secret 2.1.25. .spec.strategy.dockerStrategy Description DockerBuildStrategy defines input parameters specific to container image build. Type object Property Type Description buildArgs array (EnvVar) buildArgs contains build arguments that will be resolved in the Dockerfile. See https://docs.docker.com/engine/reference/builder/#/arg for more details. NOTE: Only the 'name' and 'value' fields are supported. Any settings on the 'valueFrom' field are ignored. dockerfilePath string dockerfilePath is the path of the Dockerfile that will be used to build the container image, relative to the root of the context (contextDir). Defaults to Dockerfile if unset. env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is a reference to an DockerImage, ImageStreamTag, or ImageStreamImage which overrides the FROM image in the Dockerfile for the build. 
If the Dockerfile uses multi-stage builds, this will replace the image in the last FROM directive of the file. imageOptimizationPolicy string imageOptimizationPolicy describes what optimizations the system can use when building images to reduce the final size or time spent building the image. The default policy is 'None' which means the final build image will be equivalent to an image created by the container image build API. The experimental policy 'SkipLayers' will avoid commiting new layers in between each image step, and will fail if the Dockerfile cannot provide compatibility with the 'None' policy. An additional experimental policy 'SkipLayersAndWarn' is the same as 'SkipLayers' but simply warns if compatibility cannot be preserved. noCache boolean noCache if set to true indicates that the container image build must be executed with the --no-cache=true flag pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 2.1.26. .spec.strategy.dockerStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 2.1.27. .spec.strategy.dockerStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 2.1.28. .spec.strategy.dockerStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 2.1.29. .spec.strategy.dockerStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' 
or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 2.1.30. .spec.strategy.dockerStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 2.1.31. .spec.strategy.jenkinsPipelineStrategy Description JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines Type object Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a build pipeline. jenkinsfile string Jenkinsfile defines the optional raw contents of a Jenkinsfile which defines a Jenkins pipeline build. jenkinsfilePath string JenkinsfilePath is the optional path of the Jenkinsfile that will be used to configure the pipeline relative to the root of the context (contextDir). If both JenkinsfilePath & Jenkinsfile are both not specified, this defaults to Jenkinsfile in the root of the specified contextDir. 2.1.32. .spec.strategy.sourceStrategy Description SourceBuildStrategy defines input parameters specific to an Source build. Type object Required from Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled incremental boolean incremental flag forces the Source build to do incremental builds if true. pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries scripts string scripts is the location of Source scripts volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 2.1.33. .spec.strategy.sourceStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 2.1.34. 
.spec.strategy.sourceStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 2.1.35. .spec.strategy.sourceStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 2.1.36. .spec.strategy.sourceStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 2.1.37. .spec.strategy.sourceStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 2.1.38. .spec.triggers Description triggers determine how new Builds can be launched from a BuildConfig. If no triggers are defined, a new build can only occur as a result of an explicit client build creation. Type array 2.1.39. .spec.triggers[] Description BuildTriggerPolicy describes a policy for a single trigger that results in a new Build. 
Type object Required type Property Type Description bitbucket object WebHookTrigger is a trigger that gets invoked using a webhook type of post generic object WebHookTrigger is a trigger that gets invoked using a webhook type of post github object WebHookTrigger is a trigger that gets invoked using a webhook type of post gitlab object WebHookTrigger is a trigger that gets invoked using a webhook type of post imageChange object ImageChangeTrigger allows builds to be triggered when an ImageStream changes type string type is the type of build trigger. Valid values: - GitHub GitHubWebHookBuildTriggerType represents a trigger that launches builds on GitHub webhook invocations - Generic GenericWebHookBuildTriggerType represents a trigger that launches builds on generic webhook invocations - GitLab GitLabWebHookBuildTriggerType represents a trigger that launches builds on GitLab webhook invocations - Bitbucket BitbucketWebHookBuildTriggerType represents a trigger that launches builds on Bitbucket webhook invocations - ImageChange ImageChangeBuildTriggerType represents a trigger that launches builds on availability of a new version of an image - ConfigChange ConfigChangeBuildTriggerType will trigger a build on an initial build config creation WARNING: In the future the behavior will change to trigger a build on any config change 2.1.40. .spec.triggers[].bitbucket Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.41. .spec.triggers[].bitbucket.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.42. .spec.triggers[].generic Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.43. .spec.triggers[].generic.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.44. .spec.triggers[].github Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.45. 
.spec.triggers[].github.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.46. .spec.triggers[].gitlab Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.47. .spec.triggers[].gitlab.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.48. .spec.triggers[].imageChange Description ImageChangeTrigger allows builds to be triggered when an ImageStream changes Type object Property Type Description from ObjectReference from is a reference to an ImageStreamTag that will trigger a build when updated It is optional. If no From is specified, the From image from the build strategy will be used. Only one ImageChangeTrigger with an empty From reference is allowed in a build configuration. lastTriggeredImageID string lastTriggeredImageID is used internally by the ImageChangeController to save last used image ID for build This field is deprecated and will be removed in a future release. Deprecated paused boolean paused is true if this trigger is temporarily disabled. Optional. 2.1.49. .status Description BuildConfigStatus contains current state of the build config object. Type object Required lastVersion Property Type Description imageChangeTriggers array ImageChangeTriggers captures the runtime state of any ImageChangeTrigger specified in the BuildConfigSpec, including the value reconciled by the OpenShift APIServer for the lastTriggeredImageID. There is a single entry in this array for each image change trigger in spec. Each trigger status references the ImageStreamTag that acts as the source of the trigger. imageChangeTriggers[] object ImageChangeTriggerStatus tracks the latest resolved status of the associated ImageChangeTrigger policy specified in the BuildConfigSpec.Triggers struct. lastVersion integer lastVersion is used to inform about number of last triggered build. 2.1.50. .status.imageChangeTriggers Description ImageChangeTriggers captures the runtime state of any ImageChangeTrigger specified in the BuildConfigSpec, including the value reconciled by the OpenShift APIServer for the lastTriggeredImageID. There is a single entry in this array for each image change trigger in spec. Each trigger status references the ImageStreamTag that acts as the source of the trigger. Type array 2.1.51. .status.imageChangeTriggers[] Description ImageChangeTriggerStatus tracks the latest resolved status of the associated ImageChangeTrigger policy specified in the BuildConfigSpec.Triggers struct. Type object Property Type Description from object ImageStreamTagReference references the ImageStreamTag in an image change trigger by namespace and name. lastTriggerTime Time lastTriggerTime is the last time this particular ImageStreamTag triggered a Build to start. 
This field is only updated when this trigger specifically started a Build. lastTriggeredImageID string lastTriggeredImageID represents the sha/id of the ImageStreamTag when a Build for this BuildConfig was started. The lastTriggeredImageID is updated each time a Build for this BuildConfig is started, even if this ImageStreamTag is not the reason the Build is started. 2.1.52. .status.imageChangeTriggers[].from Description ImageStreamTagReference references the ImageStreamTag in an image change trigger by namespace and name. Type object Property Type Description name string name is the name of the ImageStreamTag for an ImageChangeTrigger namespace string namespace is the namespace where the ImageStreamTag for an ImageChangeTrigger is located 2.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/buildconfigs GET : list or watch objects of kind BuildConfig /apis/build.openshift.io/v1/watch/buildconfigs GET : watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs DELETE : delete collection of BuildConfig GET : list or watch objects of kind BuildConfig POST : create a BuildConfig /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs GET : watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name} DELETE : delete a BuildConfig GET : read the specified BuildConfig PATCH : partially update the specified BuildConfig PUT : replace the specified BuildConfig /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs/{name} GET : watch changes to an object of kind BuildConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/build.openshift.io/v1/buildconfigs HTTP method GET Description list or watch objects of kind BuildConfig Table 2.1. HTTP responses HTTP code Reponse body 200 - OK BuildConfigList schema 401 - Unauthorized Empty 2.2.2. /apis/build.openshift.io/v1/watch/buildconfigs HTTP method GET Description watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 2.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs HTTP method DELETE Description delete collection of BuildConfig Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind BuildConfig Table 2.5. HTTP responses HTTP code Reponse body 200 - OK BuildConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a BuildConfig Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body BuildConfig schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 201 - Created BuildConfig schema 202 - Accepted BuildConfig schema 401 - Unauthorized Empty 2.2.4. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs HTTP method GET Description watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 2.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name} Table 2.10. Global path parameters Parameter Type Description name string name of the BuildConfig HTTP method DELETE Description delete a BuildConfig Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BuildConfig Table 2.13. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BuildConfig Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 201 - Created BuildConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BuildConfig Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body BuildConfig schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 201 - Created BuildConfig schema 401 - Unauthorized Empty 2.2.6. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs/{name} Table 2.19. Global path parameters Parameter Type Description name string name of the BuildConfig HTTP method GET Description watch changes to an object of kind BuildConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
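To tie the schema reference together, the following is a minimal, hypothetical BuildConfig that exercises several of the fields described above: a Git source with a contextDir, a Source strategy built from an ImageStreamTag, and GitHub webhook and image change triggers. The project, repository, image stream, and secret names are illustrative assumptions and do not come from this reference, so replace them with values from your own cluster.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app                              # assumed name
  namespace: example-project                     # assumed namespace
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/app.git    # assumed repository
      ref: main
    contextDir: app                              # sub-directory that contains the buildable source
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: ruby:latest                        # assumed builder image
  triggers:
  - type: GitHub
    github:
      secretReference:
        name: example-webhook-secret             # assumed secret used to validate webhook requests
  - type: ImageChange
    imageChange: {}                              # empty from: the strategy's builder image drives rebuilds
Because the ImageChange trigger omits the from reference, the builder image from the strategy is used, which matches the constraint that only one ImageChangeTrigger with an empty from reference is allowed per build configuration.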
[ "\"postCommit\": { \"script\": \"rake test --verbose\", }", "The above is a convenient form which is equivalent to:", "\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }", "\"postCommit\": { \"commit\": [\"rake\", \"test\", \"--verbose\"] }", "Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.", "\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }", "This form is only useful if the image entrypoint can handle arguments.", "\"postCommit\": { \"script\": \"rake test USD1\", \"args\": [\"--verbose\"] }", "This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be \"/bin/sh\" and USD1, USD2, etc, are the positional arguments from Args.", "\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }", "This form is equivalent to appending the arguments to the Command slice." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/buildconfig-build-openshift-io-v1
F.4. About RELAY2
F.4. About RELAY2 The RELAY protocol bridges two remote clusters by creating a connection between one node in each site. This allows multicast messages sent out in one site to be relayed to the other and vice versa. JGroups includes the RELAY2 protocol, which is used for communication between sites in Red Hat JBoss Data Grid's Cross-Site Replication. The RELAY2 protocol works similarly to RELAY but with slight differences. Unlike RELAY , the RELAY2 protocol: connects more than two sites. connects sites that operate autonomously and are unaware of each other. offers both unicast and multicast routing between sites.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/About_RELAY2
Chapter 16. Troubleshooting Network Observability
Chapter 16. Troubleshooting Network Observability To assist in troubleshooting Network Observability issues, you can perform some troubleshooting actions. 16.1. Using the must-gather tool You can use the must-gather tool to collect information about the Network Observability Operator resources and cluster-wide resources, such as pod logs, FlowCollector , and webhook configurations. Procedure Navigate to the directory where you want to store the must-gather data. Run the following command to collect cluster-wide must-gather resources: $ oc adm must-gather --image-stream=openshift/must-gather \ --image=quay.io/netobserv/must-gather 16.2. Configuring network traffic menu entry in the OpenShift Container Platform console Manually configure the network traffic menu entry in the OpenShift Container Platform console when the network traffic menu entry is not listed in the Observe menu in the OpenShift Container Platform console. Prerequisites You have installed OpenShift Container Platform version 4.10 or newer. Procedure Check if the spec.consolePlugin.register field is set to true by running the following command: $ oc -n netobserv get flowcollector cluster -o yaml Example output Optional: Add the netobserv-plugin plugin by manually editing the Console Operator config: $ oc edit console.operator.openshift.io cluster Example output Optional: Set the spec.consolePlugin.register field to true by running the following command: $ oc -n netobserv edit flowcollector cluster -o yaml Example output Ensure the status of console pods is running by running the following command: $ oc get pods -n openshift-console -l app=console Restart the console pods by running the following command: $ oc delete pods -n openshift-console -l app=console Clear your browser cache and history. Check the status of Network Observability plugin pods by running the following command: $ oc get pods -n netobserv -l app=netobserv-plugin Example output Check the logs of the Network Observability plugin pods by running the following command: $ oc logs -n netobserv -l app=netobserv-plugin Example output time="2022-12-13T12:06:49Z" level=info msg="Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info" module=main time="2022-12-13T12:06:49Z" level=info msg="listening on https://:9001" module=server 16.3. Flowlogs-Pipeline does not consume network flows after installing Kafka If you deployed the flow collector first with deploymentModel: KAFKA and then deployed Kafka, the flow collector might not connect correctly to Kafka. Manually restart the flow-pipeline pods where Flowlogs-pipeline does not consume network flows from Kafka. Procedure Delete the flow-pipeline pods to restart them by running the following command: $ oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer 16.4. Failing to see network flows from both br-int and br-ex interfaces br-ex and br-int are virtual bridge devices operated at OSI layer 2. The eBPF agent works at the IP and TCP levels, layers 3 and 4 respectively. You can expect that the eBPF agent captures the network traffic passing through br-ex and br-int , when the network traffic is processed by other interfaces such as physical host or virtual pod interfaces. If you restrict the eBPF agent network interfaces to attach only to br-ex and br-int , you do not see any network flow. Manually remove the part in the interfaces or excludeInterfaces that restricts the network interfaces to br-int and br-ex .
Procedure Remove the interfaces: [ 'br-int', 'br-ex' ] field. This allows the agent to fetch information from all the interfaces. Alternatively, you can specify the Layer-3 interface, for example, eth0 . Run the following command: $ oc edit -n netobserv flowcollector.yaml -o yaml Example output 1 Specifies the network interfaces. 16.5. Network Observability controller manager pod runs out of memory You can increase memory limits for the Network Observability operator by editing the spec.config.resources.limits.memory specification in the Subscription object. Procedure In the web console, navigate to Operators Installed Operators Click Network Observability and then select Subscription . From the Actions menu, click Edit Subscription . Alternatively, you can use the CLI to open the YAML configuration for the Subscription object by running the following command: $ oc edit subscription netobserv-operator -n openshift-netobserv-operator Edit the Subscription object to add the config.resources.limits.memory specification and set the value to account for your memory requirements. See the Additional resources for more information about resource considerations: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2 1 For example, you can increase the memory limit to 800Mi . 2 This value should not be edited, but note that it changes depending on the most current release of the Operator. 16.6. Running custom queries to Loki For troubleshooting, you can run custom queries to Loki. There are two examples of ways to do this, which you can adapt according to your needs by replacing the <api_token> with your own. Note These examples use the netobserv namespace for the Network Observability Operator and Loki deployments. Additionally, the examples assume that the LokiStack is named loki . You can optionally use a different namespace and naming by adapting the examples, specifically the -n netobserv or the loki-gateway URL. Prerequisites Installed Loki Operator for use with Network Observability Operator Procedure To get all available labels, run the following: $ oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq To get all flows from the source namespace, my-namespace , run the following: $ oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace="my-namespace"}' | jq Additional resources Resource considerations 16.7. Troubleshooting Loki ResourceExhausted error Loki may return a ResourceExhausted error when network flow data sent by Network Observability exceeds the configured maximum message size. If you are using the Red Hat Loki Operator, this maximum message size is configured to 100 MiB. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project drop-down menu. In the Provided APIs list, select the Network Observability Operator.
Click the Flow Collector then the YAML view tab. If you are using the Loki Operator, check that the spec.loki.batchSize value does not exceed 98 MiB. If you are using a Loki installation method that is different from the Red Hat Loki Operator, such as Grafana Loki, verify that the grpc_server_max_recv_msg_size Grafana Loki server setting is higher than the FlowCollector resource spec.loki.batchSize value. If it is not, you must either increase the grpc_server_max_recv_msg_size value, or decrease the spec.loki.batchSize value so that it is lower than the limit. Click Save if you edited the FlowCollector . 16.8. Loki empty ring error The Loki "empty ring" error results in flows not being stored in Loki and not showing up in the web console. This error might happen in various situations. A single workaround to address them all does not exist. There are some actions you can take to investigate the logs in your Loki pods, and verify that the LokiStack is healthy and ready. Some of the situations where this error is observed are as follows: After a LokiStack is uninstalled and reinstalled in the same namespace, old PVCs are not removed, which can cause this error. Action : You can try removing the LokiStack again, removing the PVC, then reinstalling the LokiStack . After a certificate rotation, this error can prevent communication with the flowlogs-pipeline and console-plugin pods. Action : You can restart the pods to restore the connectivity. 16.9. Resource troubleshooting 16.10. LokiStack rate limit errors A rate-limit placed on the Loki tenant can result in potential temporary loss of data and a 429 error: Per stream rate limit exceeded (limit:xMB/sec) while attempting to ingest for stream . You might consider having an alert set to notify you of this error. For more information, see "Creating Loki rate limit alerts for the NetObserv dashboard" in the Additional resources of this section. You can update the LokiStack CRD with the perStreamRateLimit and perStreamRateLimitBurst specifications, as shown in the following procedure. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator , and select the LokiStack tab. Create or edit an existing LokiStack instance using the YAML view to add the perStreamRateLimit and perStreamRateLimitBurst specifications: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed 1 The default value for perStreamRateLimit is 3 . 2 The default value for perStreamRateLimitBurst is 15 . Click Save . Verification Once you update the perStreamRateLimit and perStreamRateLimitBurst specifications, the pods in your cluster restart and the 429 rate-limit error no longer occurs. 16.11. Running a large query results in Loki errors When running large queries for a long time, Loki errors can occur, such as a timeout or too many outstanding requests . There is no complete corrective for this issue, but there are several ways to mitigate it: Adapt your query to add an indexed filter With Loki queries, you can query on both indexed and non-indexed fields or labels. Queries that contain filters on labels perform better. For example, if you query for a particular Pod, which is not an indexed field, you can add its Namespace to the query. 
The list of indexed fields can be found in the "Network flows format reference", in the Loki label column. Consider querying Prometheus rather than Loki Prometheus is a better fit than Loki to query on large time ranges. However, whether or not you can use Prometheus instead of Loki depends on the use case. For example, queries on Prometheus are much faster than on Loki, and large time ranges do not impact performance. But Prometheus metrics do not contain as much information as flow logs in Loki. The Network Observability OpenShift web console automatically favors Prometheus over Loki if the query is compatible; otherwise, it defaults to Loki. If your query does not run against Prometheus, you can change some filters or aggregations to make the switch. In the OpenShift web console, you can force the use of Prometheus. An error message is displayed when incompatible queries fail, which can help you figure out which labels to change to make the query compatible. For example, changing a filter or an aggregation from Resource or Pods to Owner . Consider using the FlowMetrics API to create your own metric If the data that you need isn't available as a Prometheus metric, you can use the FlowMetrics API to create your own metric. For more information, see "FlowMetrics API Reference" and "Configuring custom metrics by using FlowMetric API". Configure Loki to improve the query performance If the problem persists, you can consider configuring Loki to improve the query performance. Some options depend on the installation mode you used for Loki, such as using the Operator and LokiStack , or Monolithic mode, or Microservices mode. In LokiStack or Microservices modes, try increasing the number of querier replicas . Increase the query timeout . You must also increase the Network Observability read timeout to Loki in the FlowCollector spec.loki.readTimeout . Additional resources Network flows format reference FlowMetric API reference Configuring custom metrics by using FlowMetric API
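Section 16.11 recommends increasing the Network Observability read timeout to Loki through the FlowCollector spec.loki.readTimeout field when long-running queries time out. The following is a minimal sketch of where that field sits, assuming the flows.netobserv.io/v1alpha1 API version used by the other examples in this chapter and an illustrative 2m value; check the API version and defaults that apply to your installed Operator release.
apiVersion: flows.netobserv.io/v1alpha1   # assumed; match the API version reported by your Operator release
kind: FlowCollector
metadata:
  name: cluster
spec:
  loki:
    readTimeout: 2m   # assumed example value; raise it together with the Loki query timeout
Keep this read timeout consistent with the querier timeout configured on the Loki side, otherwise whichever limit is lower still cuts the query short.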
[ "oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather", "oc -n netobserv get flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false", "oc edit console.operator.openshift.io cluster", "spec: plugins: - netobserv-plugin", "oc -n netobserv edit flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true", "oc get pods -n openshift-console -l app=console", "oc delete pods -n openshift-console -l app=console", "oc get pods -n netobserv -l app=netobserv-plugin", "NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s", "oc logs -n netobserv -l app=netobserv-plugin", "time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server", "oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer", "oc edit -n netobserv flowcollector.yaml -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1", "oc edit subscription netobserv-operator -n openshift-netobserv-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/installing-troubleshooting
Chapter 4. Installing the distributed workloads components
Chapter 4. Installing the distributed workloads components To use the distributed workloads feature in OpenShift AI, you must install several components. Prerequisites You have logged in to OpenShift with the cluster-admin role and you can access the data science cluster. You have installed Red Hat OpenShift AI. You have sufficient resources. In addition to the minimum OpenShift AI resources described in Installing and deploying OpenShift AI (for disconnected environments, see Deploying OpenShift AI in a disconnected environment ), you need 1.6 vCPU and 2 GiB memory to deploy the distributed workloads infrastructure. You have removed any previously installed instances of the CodeFlare Operator, as described in the Knowledgebase solution How to migrate from a separately installed CodeFlare Operator in your data science cluster . If you want to use graphics processing units (GPUs), you have enabled GPU support in OpenShift AI. If you use NVIDIA GPUs, see Enabling NVIDIA GPUs . If you use AMD GPUs, see AMD GPU integration . Note Red Hat supports the use of accelerators within the same cluster only. Red Hat does not support remote direct memory access (RDMA) between accelerators, or the use of accelerators across a network, for example, by using technology such as NVIDIA GPUDirect or NVLink. If you want to use self-signed certificates, you have added them to a central Certificate Authority (CA) bundle as described in Working with certificates (for disconnected environments, see Working with certificates ). No additional configuration is necessary to use those certificates with distributed workloads. The centrally configured self-signed certificates are automatically available in the workload pods at the following mount points: Cluster-wide CA bundle: /etc/pki/tls/certs/odh-trusted-ca-bundle.crt /etc/ssl/certs/odh-trusted-ca-bundle.crt Custom CA bundle: /etc/pki/tls/certs/odh-ca-bundle.crt /etc/ssl/certs/odh-ca-bundle.crt Procedure In the OpenShift console, click Operators Installed Operators . Search for the Red Hat OpenShift AI Operator, and then click the Operator name to open the Operator details page. Click the Data Science Cluster tab. Click the default instance name (for example, default-dsc ) to open the instance details page. Click the YAML tab to show the instance specifications. Enable the required distributed workloads components. In the spec.components section, set the managementState field correctly for the required components: If you want to use the CodeFlare framework to tune models, enable the codeflare , kueue , and ray components. If you want to use the Kubeflow Training Operator to tune models, enable the kueue and trainingoperator components. The list of required components depends on whether the distributed workload is run from a pipeline or notebook or both, as shown in the following table. Table 4.1. Components required for distributed workloads Component Pipelines only Notebooks only Pipelines and notebooks codeflare Managed Managed Managed dashboard Managed Managed Managed datasciencepipelines Managed Removed Managed kueue Managed Managed Managed ray Managed Managed Managed trainingoperator Managed Managed Managed workbenches Removed Managed Managed Click Save . After a short time, the components with a Managed state are ready. Verification Check the status of the codeflare-operator-manager , kuberay-operator , and kueue-controller-manager pods, as follows: In the OpenShift console, from the Project list, select redhat-ods-applications . Click Workloads Deployments . 
Search for the codeflare-operator-manager , kuberay-operator , and kueue-controller-manager deployments. In each case, check the status as follows: Click the deployment name to open the deployment details page. Click the Pods tab. Check the pod status. When the status of the codeflare-operator-manager- <pod-id> , kuberay-operator- <pod-id> , and kueue-controller-manager- <pod-id> pods is Running , the pods are ready to use. To see more information about each pod, click the pod name to open the pod details page, and then click the Logs tab. Step Configure the distributed workloads feature as described in Managing distributed workloads .
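For reference, the spec.components section described in the procedure above looks broadly like the following sketch when the components required for both pipelines and notebooks are enabled. Only the components from Table 4.1 are shown, and the apiVersion is an assumption; your default-dsc instance contains additional components and fields, so edit your existing YAML rather than replacing it with this snippet.
apiVersion: datasciencecluster.opendatahub.io/v1   # assumed API version for the DataScienceCluster resource
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    codeflare:
      managementState: Managed
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
    kueue:
      managementState: Managed
    ray:
      managementState: Managed
    trainingoperator:
      managementState: Managed
    workbenches:
      managementState: Managed
Set any component you do not need to Removed instead, following the Pipelines only and Notebooks only columns of Table 4.1.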
[ "/etc/pki/tls/certs/odh-trusted-ca-bundle.crt /etc/ssl/certs/odh-trusted-ca-bundle.crt", "/etc/pki/tls/certs/odh-ca-bundle.crt /etc/ssl/certs/odh-ca-bundle.crt" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/installing-the-distributed-workloads-components_install
Chapter 5. Using Ansible to manage DNS locations in IdM
Chapter 5. Using Ansible to manage DNS locations in IdM As Identity Management (IdM) administrator, you can manage IdM DNS locations using the location module available in the ansible-freeipa package. DNS-based service discovery Deployment considerations for DNS locations DNS time to live (TTL) Using Ansible to ensure an IdM location is present Using Ansible to ensure an IdM location is absent 5.1. DNS-based service discovery DNS-based service discovery is a process in which a client uses the DNS protocol to locate servers in a network that offer a specific service, such as LDAP or Kerberos . One typical type of operation is to allow clients to locate authentication servers within the closest network infrastructure, because they provide a higher throughput and lower network latency, lowering overall costs. The major advantages of service discovery are: No need for clients to be explicitly configured with names of nearby servers. DNS servers are used as central providers of policy. Clients using the same DNS server have access to the same policy about service providers and their preferred order. In an Identity Management (IdM) domain, DNS service records (SRV records) exist for LDAP , Kerberos , and other services. For example, the following command queries the DNS server for hosts providing a TCP-based Kerberos service in an IdM DNS domain: Example 5.1. DNS location independent results The output contains the following information: 0 (priority): Priority of the target host. A lower value is preferred. 100 (weight). Specifies a relative weight for entries with the same priority. For further information, see RFC 2782, section 3 . 88 (port number): Port number of the service. Canonical name of the host providing the service. In the example, the two host names returned have the same priority and weight. In this case, the client uses a random entry from the result list. When the client is, instead, configured to query a DNS server that is configured in a DNS location, the output differs. For IdM servers that are assigned to a location, tailored values are returned. In the example below, the client is configured to query a DNS server in the location germany : Example 5.2. DNS location-based results The IdM DNS server automatically returns a DNS alias (CNAME) pointing to a DNS location specific SRV record which prefers local servers. This CNAME record is shown in the first line of the output. In the example, the host idmserver-01.idm.example.com has the lowest priority value and is therefore preferred. The idmserver-02.idm.example.com has a higher priority and thus is used only as backup for cases when the preferred host is unavailable. 5.2. Deployment considerations for DNS locations Identity Management (IdM) can generate location-specific service (SRV) records when using the integrated DNS. Because each IdM DNS server generates location-specific SRV records, you have to install at least one IdM DNS server in each DNS location. The client's affinity to a DNS location is only defined by the DNS records received by the client. For this reason, you can combine IdM DNS servers with non-IdM DNS consumer servers and recursors if the clients doing DNS service discovery resolve location-specific records from IdM DNS servers. In the majority of deployments with mixed IdM and non-IdM DNS services, DNS recursors select the closest IdM DNS server automatically by using round-trip time metrics. 
Typically, this ensures that clients using non-IdM DNS servers are getting records for the nearest DNS location and thus use the optimal set of IdM servers. 5.3. DNS time to live (TTL) Clients can cache DNS resource records for an amount of time that is set in the zone's configuration. Because of this caching, a client might not be able to receive the changes until the time to live (TTL) value expires. The default TTL value in Identity Management (IdM) is 1 day . If your client computers roam between sites, you should adapt the TTL value for your IdM DNS zone. Set the value to a lower value than the time clients need to roam between sites. This ensures that cached DNS entries on the client expire before they reconnect to another site and thus query the DNS server to refresh location-specific SRV records. Additional resources Configuration attributes of primary IdM DNS zones 5.4. Using Ansible to ensure an IdM location is present As a system administrator of Identity Management (IdM), you can configure IdM DNS locations to allow clients to locate authentication servers within the closest network infrastructure. The following procedure describes how to use an Ansible playbook to ensure a DNS location is present in IdM. The example describes how to ensure that the germany DNS location is present in IdM. As a result, you can assign particular IdM servers to this location so that local IdM clients can use them to reduce server response time. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You understand the deployment considerations for DNS locations . Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the location-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/location/ directory: Open the location-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipalocation task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the location. This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Assigning an IdM server to a DNS location using the IdM Web UI Assigning an IdM server to a DNS location using the IdM CLI 5.5. Using Ansible to ensure an IdM location is absent As a system administrator of Identity Management (IdM), you can configure IdM DNS locations to allow clients to locate authentication servers within the closest network infrastructure. The following procedure describes how to use an Ansible playbook to ensure that a DNS location is absent in IdM. The example describes how to ensure that the germany DNS location is absent in IdM. 
As a result, you cannot assign particular IdM servers to this location and local IdM clients cannot use them. Prerequisites You know the IdM administrator password. No IdM server is assigned to the germany DNS location. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The example assumes that you have created and configured the ~/ MyPlaybooks / directory as a central location to store copies of sample playbooks. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the location-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/location/ directory: Open the location-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipalocation task section: Adapt the name of the task to correspond to your use case. Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the DNS location. Make sure that the state variable is set to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: 5.6. Additional resources See the README-location.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/location directory.
[ "dig -t SRV +short _kerberos._tcp.idm.example.com 0 100 88 idmserver-01.idm.example.com. 0 100 88 idmserver-02.idm.example.com.", "dig -t SRV +short _kerberos._tcp.idm.example.com _kerberos._tcp.germany._locations.idm.example.com. 0 100 88 idmserver-01.idm.example.com. 50 100 88 idmserver-02.idm.example.com.", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/location/location-present.yml location-present-copy.yml", "--- - name: location present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"germany\" location is present ipalocation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: germany", "ansible-playbook --vault-password-file=password_file -v -i inventory location-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/location/location-absent.yml location-absent-copy.yml", "--- - name: location absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"germany\" location is absent ipalocation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: germany state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory location-absent-copy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_dns_in_identity_management/using-ansible-to-manage-dns-locations-in-idm_working-with-dns-in-identity-management
Chapter 4. Viewing the Ceph overview with Datadog
Chapter 4. Viewing the Ceph overview with Datadog After installing and configuring the Datadog integration with Ceph, return to the Datadog App . The user interface will present navigation on the left side of the screen. Prerequisites Internet access. Procedure Hover over Dashboards to expose the submenu and then click Ceph Overview . Datadog presents an overview of the Ceph Storage Cluster. Click Dashboards->New Dashboard to create a custom Ceph dashboard.
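If the Ceph Overview dashboard remains empty, it is usually worth confirming that the Datadog Agent is collecting Ceph metrics on the node where the integration is configured. The commands below are a hedged sketch that assumes a Datadog Agent v6 or later installation; adjust them for your Agent version.

# Run the Ceph check once and print the metrics it collects
sudo -u dd-agent datadog-agent check ceph

# Review the overall Agent status, including the ceph integration section
sudo datadog-agent status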
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_datadog_guide/viewing-the-ceph-overview-with-datadog_datadog
Chapter 1. Overview of model registries
Chapter 1. Overview of model registries Important Model registry is currently available in Red Hat OpenShift AI 2.18 as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . A model registry is an important component in the lifecycle of an artificial intelligence/machine learning (AI/ML) model, and a vital part of any machine learning operations (MLOps) platform or ML workflow. A model registry acts as a central repository, holding metadata related to machine learning models from inception to deployment. This metadata ranges from high-level information like the deployment environment and project origins, to intricate details like training hyperparameters, performance metrics, and deployment events. A model registry acts as a bridge between model experimentation and serving, offering a secure, collaborative metadata store interface for stakeholders of the ML lifecycle. Model registries provide a structured and organized way to store, share, version, deploy, and track models. To use model registries in OpenShift AI, an OpenShift cluster administrator must configure the model registry component. For more information, see Configuring the model registry component . After the model registry component is configured, an OpenShift AI administrator can create model registries in OpenShift AI and grant model registry access to the data scientists that will work with them. For more information, see Managing model registries . Data scientists with access to a model registry can store, share, version, deploy, and track models using the model registry feature. For more information, see Working with model registries .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_model_registries/overview-of-model-registries_managing-model-registries
31.3. Adding HBAC Service Entries for Custom HBAC Services
31.3. Adding HBAC Service Entries for Custom HBAC Services Only the most common services and service groups are configured for HBAC rules by default. However, you can also configure any other pluggable authentication module (PAM) service as an HBAC service. This enables you to define the custom PAM service in an HBAC rule. Note Adding a service as an HBAC service is not the same as adding a service to the domain. Adding a service to the domain (described in Section 16.1, "Adding and Editing Service Entries and Keytabs" ) makes the service a recognized resource available to other resources in the domain, but it does not enable you to use the service in HBAC rules. To add an HBAC service entry, you can use: the IdM web UI (see the section called "Web UI: Adding an HBAC Service Entry" ) the command line (see the section called "Command Line: Adding an HBAC Service Entry" ) Web UI: Adding an HBAC Service Entry Select Policy Host-Based Access Control HBAC Services . Click Add to add an HBAC service entry. Enter a name for the service, and click Add . Command Line: Adding an HBAC Service Entry Use the ipa hbacsvc-add command. For example, to add an entry for the tftp service:
[ "ipa hbacsvc-add tftp ------------------------- Added HBAC service \"tftp\" ------------------------- Service name: tftp" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/hbac-add-service
Chapter 16. Scanning pods for vulnerabilities
Chapter 16. Scanning pods for vulnerabilities Using the Red Hat Quay Container Security Operator, you can access vulnerability scan results from the OpenShift Container Platform web console for container images used in active pods on the cluster. The Red Hat Quay Container Security Operator: Watches containers associated with pods on all or specified namespaces Queries the container registry where the containers came from for vulnerability information, provided an image's registry is running image scanning (such as Quay.io or a Red Hat Quay registry with Clair scanning) Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API Using the instructions here, the Red Hat Quay Container Security Operator is installed in the openshift-operators namespace, so it is available to all namespaces on your OpenShift Container Platform cluster. 16.1. Installing the Red Hat Quay Container Security Operator You can install the Red Hat Quay Container Security Operator from the OpenShift Container Platform web console Operator Hub, or by using the CLI. Prerequisites You have installed the oc CLI. You have administrator privileges to the OpenShift Container Platform cluster. You have containers that come from a Red Hat Quay or Quay.io registry running on your cluster. Procedure You can install the Red Hat Quay Container Security Operator by using the OpenShift Container Platform web console: On the web console, navigate to Operators OperatorHub and select Security . Select the Red Hat Quay Container Security Operator Operator, and then select Install . On the Red Hat Quay Container Security Operator page, select Install . Update channel , Installation mode , and Update approval are selected automatically. The Installed Namespace field defaults to openshift-operators . You can adjust these settings as needed. Select Install . The Red Hat Quay Container Security Operator appears after a few moments on the Installed Operators page. Optional: You can add custom certificates to the Red Hat Quay Container Security Operator. For example, create a certificate named quay.crt in the current directory. Then, run the following command to add the custom certificate to the Red Hat Quay Container Security Operator: USD oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators Optional: If you added a custom certificate, restart the Red Hat Quay Container Security Operator pod for the new certificates to take effect. Alternatively, you can install the Red Hat Quay Container Security Operator by using the CLI: Retrieve the latest version of the Container Security Operator and its channel by entering the following command: USD oc get packagemanifests container-security-operator \ -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{"\n"}{end}' \ | awk '{print "STARTING_CSV=" USD1 " CHANNEL=" USD2 }' \ | sort -Vr \ | head -1 Example output STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8 Using the output from the command, create a Subscription custom resource for the Red Hat Quay Container Security Operator and save it as container-security-operator.yaml . 
For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2 1 Specify the value you obtained in the step for the spec.channel parameter. 2 Specify the value you obtained in the step for the spec.startingCSV parameter. Enter the following command to apply the configuration: USD oc apply -f container-security-operator.yaml Example output subscription.operators.coreos.com/container-security-operator created 16.2. Using the Red Hat Quay Container Security Operator The following procedure shows you how to use the Red Hat Quay Container Security Operator. Prerequisites You have installed the Red Hat Quay Container Security Operator. Procedure On the OpenShift Container Platform web console, navigate to Home Overview . Under the Status section, Image Vulnerabilities provides the number of vulnerabilities found. Click Image Vulnerabilities to reveal the Image Vulnerabilities breakdown tab, which details the severity of the vulnerabilities, whether the vulnerabilities can be fixed, and the total number of vulnerabilities. You can address detected vulnerabilities in one of two ways: Select a link under the Vulnerabilities section. This takes you to the container registry that the container came from, where you can see information about the vulnerability. Select the namespace link. This takes you to the Image Manifest Vulnerabilities page, where you can see the name of the selected image and all of the namespaces where that image is running. After you have learned what images are vulnerable, how to fix those vulnerabilities, and the namespaces that the images are being run in, you can improve security by performing the following actions: Alert anyone in your organization who is running the image and request that they correct the vulnerability. Stop the images from running by deleting the deployment or other object that started the pod that the image is in. Note If you delete the pod, it might take several minutes for the vulnerability information to reset on the dashboard. 16.3. Querying image vulnerabilities from the CLI Using the oc command, you can display information about vulnerabilities detected by the Red Hat Quay Container Security Operator. Prerequisites You have installed the Red Hat Quay Container Security Operator on your OpenShift Container Platform instance. Procedure Enter the following command to query for detected container image vulnerabilities: USD oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s To display details for a particular vulnerability, append the vulnerability name and its namespace to the oc describe command. The following example shows an active container whose image includes an RPM package with a vulnerability: USD oc describe vuln --namespace mynamespace sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries... 16.4. 
Uninstalling the Red Hat Quay Container Security Operator To uninstall the Container Security Operator, you must uninstall the Operator and delete the imagemanifestvulns.secscan.quay.redhat.com custom resource definition (CRD). Procedure On the OpenShift Container Platform web console, click Operators Installed Operators . Click the Options menu of the Container Security Operator. Click Uninstall Operator . Confirm your decision by clicking Uninstall in the popup window. Use the CLI to delete the imagemanifestvulns.secscan.quay.redhat.com CRD. Remove the imagemanifestvulns.secscan.quay.redhat.com custom resource definition by entering the following command: USD oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com Example output customresourcedefinition.apiextensions.k8s.io "imagemanifestvulns.secscan.quay.redhat.com" deleted
[ "oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators", "oc get packagemanifests container-security-operator -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{\"\\n\"}{end}' | awk '{print \"STARTING_CSV=\" USD1 \" CHANNEL=\" USD2 }' | sort -Vr | head -1", "STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2", "oc apply -f container-security-operator.yaml", "subscription.operators.coreos.com/container-security-operator created", "oc get vuln --all-namespaces", "NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s", "oc describe vuln --namespace mynamespace sha256.ac50e3752", "Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries", "oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com", "customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_and_compliance/pod-vulnerability-scan
Chapter 4. View OpenShift Data Foundation Topology
Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
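You can cross-check the information shown in the Topology view from the command line. The following is a minimal sketch that assumes the default openshift-storage namespace.

# List the nodes that carry the OpenShift Data Foundation storage label
oc get nodes -l cluster.ocs.openshift.io/openshift-storage

# Show the storage deployments and the node that each pod is scheduled on
oc get deployments -n openshift-storage
oc get pods -n openshift-storage -o wide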
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_google_cloud/viewing-odf-topology_rhodf
Manage Secrets with OpenStack Key Manager
Manage Secrets with OpenStack Key Manager Red Hat OpenStack Platform 16.0 How to integrate OpenStack Key Manager (Barbican) with your OpenStack deployment. OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/manage_secrets_with_openstack_key_manager/index
Chapter 7. Initial steps for overcloud preparation
Chapter 7. Initial steps for overcloud preparation You must complete some initial steps to prepare for the overcloud upgrade. 7.1. Preparing for overcloud service downtime The overcloud upgrade process disables the main control plane services at key points. You cannot use any overcloud services to create new resources when these key points are reached. Workloads that are running in the overcloud remain active during the upgrade process, which means instances continue to run during the upgrade of the control plane. During an upgrade of Compute nodes, these workloads can be live migrated to Compute nodes that are already upgraded. It is important to plan a maintenance window to ensure that no users can access the overcloud services during the upgrade. Affected by the overcloud upgrade: OpenStack Platform services. Unaffected by the overcloud upgrade: instances running during the upgrade, Ceph Storage OSDs (backend storage for instances), Linux networking, Open vSwitch networking, and the undercloud. 7.2. Selecting Compute nodes for upgrade testing The overcloud upgrade process allows you to either upgrade all nodes in a role or upgrade individual nodes separately. To ensure a smooth overcloud upgrade process, it is useful to test the upgrade on a few individual Compute nodes in your environment before upgrading all Compute nodes. This ensures no major issues occur during the upgrade while maintaining minimal downtime to your workloads. Use the following recommendations to help choose test nodes for the upgrade: Select two or three Compute nodes for upgrade testing Select nodes without any critical instances running If necessary, migrate critical instances from the selected test Compute nodes to other Compute nodes 7.3. Creating an overcloud inventory file Generate an Ansible inventory file of all nodes in your environment with the tripleo-ansible-inventory command. Procedure Log in to the undercloud as the stack user. Source the stackrc file. Create a static inventory file of all nodes: If you are not using the default overcloud stack name, replace <STACK_NAME> with the name of your stack. To execute Ansible playbooks on your environment, run the ansible-playbook command and include the full path of the dynamic inventory tool using the -i option. For example: 7.4. Validating the pre-upgrade requirements Run the pre-upgrade validation group to check the pre-upgrade requirements. For more information about the Red Hat OpenStack Platform (RHOSP) validation framework, see Using the validation framework in the Director Installation and Usage guide. Procedure Source the stackrc file. Run the openstack tripleo validator run command with the --group pre-upgrade option and include the /usr/libexec/platform-python python runtime environment: Note Ensure that you include the list of packages that you want to check. You can use --extra-vars or --extra-vars-file in the command to supply the list through the CLI. For more information, see Command arguments for tripleo validator run . Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report: Important A FAILED validation does not prevent you from deploying or running RHOSP. However, a FAILED validation can indicate a potential issue with a production environment. 7.5. Disabling fencing in the overcloud Before you upgrade the overcloud, ensure that fencing is disabled.
When you upgrade the overcloud, you upgrade each Controller node individually to retain high availability functionality. If fencing is deployed in your environment, the overcloud might detect certain nodes as disabled and attempt fencing operations, which can cause unintended results. If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the upgrade to avoid any unintended results. Procedure Log in to the undercloud as the stack user. Source the stackrc file. Log in to a Controller node and run the Pacemaker command to disable fencing: USD ssh heat-admin@<controller_ip> "sudo pcs property set stonith-enabled=false" Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command. In the fencing.yaml environment file, set the EnableFencing parameter to false to ensure that fencing stays disabled during the upgrade process. Additional Resources Fencing Controller nodes with STONITH 7.6. Checking custom Puppet parameters If you use the ExtraConfig interfaces for customizations of Puppet parameters, Puppet might report duplicate declaration errors during the upgrade. This is due to changes in the interfaces provided by the Puppet modules themselves. This procedure shows how to check for any custom ExtraConfig hieradata parameters in your environment files. Procedure Select an environment file and check whether it has an ExtraConfig parameter: If the results show an ExtraConfig parameter for any role (e.g. ControllerExtraConfig ) in the chosen file, check the full parameter structure in that file. If the parameter contains any Puppet hieradata with a SECTION/parameter syntax followed by a value , it might have been replaced with a parameter with an actual Puppet class. For example: Check the director's Puppet modules to see if the parameter now exists within a Puppet class. For example: If so, change to the new interface. The following are examples to demonstrate the change in syntax: Example 1: Changes to: Example 2: Changes to:
[ "source ~/stackrc", "tripleo-ansible-inventory --static-yaml-inventory ~/inventory.yaml --stack <STACK_NAME> --ansible_ssh_user heat-admin", "(undercloud) USD ansible-playbook -i ~/inventory.yaml <PLAYBOOK>", "source ~/stackrc", "openstack tripleo validator run --group pre-upgrade --python-interpreter /usr/libexec/platform-python -i inventory.yaml", "openstack tripleo validator show run --full <UUID>", "source ~/stackrc", "ssh heat-admin@<controller_ip> \"sudo pcs property set stonith-enabled=false\"", "grep ExtraConfig ~/templates/custom-config.yaml", "parameter_defaults: ExtraConfig: neutron::config::dhcp_agent_config: 'DEFAULT/dnsmasq_local_resolv': value: 'true'", "grep dnsmasq_local_resolv", "parameter_defaults: ExtraConfig: neutron::config::dhcp_agent_config: 'DEFAULT/dnsmasq_local_resolv': value: 'true'", "parameter_defaults: ExtraConfig: neutron::agents::dhcp::dnsmasq_local_resolv: true", "parameter_defaults: ExtraConfig: ceilometer::config::ceilometer_config: 'oslo_messaging_rabbit/rabbit_qos_prefetch_count': value: '32'", "parameter_defaults: ExtraConfig: oslo::messaging::rabbit::rabbit_qos_prefetch_count: '32'" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/initial-steps-for-overcloud-preparation_preparing-overcloud
Installing on any platform
Installing on any platform OpenShift Container Platform 4.15 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", 
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: 
Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_any_platform/index
32.10. Making the Installation Tree Available
32.10. Making the Installation Tree Available The kickstart installation must access an installation tree . An installation tree is a copy of the binary Red Hat Enterprise Linux DVD with the same directory structure. If you are performing a DVD-based installation, insert the Red Hat Enterprise Linux installation DVD into the computer before starting the kickstart installation. If you are performing a hard drive installation, make sure the ISO images of the binary Red Hat Enterprise Linux DVD are on a hard drive in the computer. If you are performing a network-based (NFS, FTP or HTTP) installation, you must make the installation tree or ISO image available over the network. Refer to Section 4.1, "Preparing for a Network Installation" for details.
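As one hedged example of making the installation tree available over HTTP, you can loop-mount the binary DVD ISO under the web server's document root; the ISO file name, paths, and IP address below are placeholders.

# Loop-mount the binary DVD ISO under the web server's document root
mkdir -p /var/www/html/rhel6
mount -o loop /path/to/RHEL6-Server-x86_64-DVD.iso /var/www/html/rhel6

# The kickstart file can then point at the tree, for example:
#   url --url=http://192.168.0.1/rhel6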
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-install-tree
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/using_selinux_for_sap_hana/conscious-language-message_using-selinux
Chapter 7. Logical Volume Manager (LVM)
Chapter 7. Logical Volume Manager (LVM) 7.1. What is LVM? LVM is a method of allocating hard drive space into logical volumes that can be easily resized instead of partitions. With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes . A physical volume cannot span over more than one drive. The physical volumes are combined into logical volume groups , with the exception of the /boot/ partition. The /boot/ partition cannot be on a logical volume group because the boot loader cannot read it. If the root ( / ) partition is on a logical volume, create a separate /boot/ partition which is not a part of a volume group. Since a physical volume cannot span over multiple drives, to span over more than one drive, create one or more physical volumes per drive. Figure 7.1. Logical Volume Group The logical volume group is divided into logical volumes , which are assigned mount points, such as /home and / , and file system types, such as ext2 or ext3. When "partitions" reach their full capacity, free space from the logical volume group can be added to the logical volume to increase the size of the partition. When a new hard drive is added to the system, it can be added to the logical volume group, and partitions that are logical volumes can be expanded. Figure 7.2. Logical Volumes On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to another hard drive, the original hard drive space has to be reallocated as a different partition or not used. LVM support must be compiled into the kernel, and the default Red Hat kernel is compiled with LVM support. To learn how to configure LVM during the installation process, refer to Chapter 8, LVM Configuration .
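As a brief illustration of the workflow described above, the following sketch adds a new drive to an existing volume group and grows a logical volume. The device, volume group, and logical volume names are placeholders, and the file system must be resized afterwards with the tool appropriate to its type (for example, resize2fs for ext2 and ext3).

# Initialize the new drive (or a partition on it) as a physical volume
pvcreate /dev/sdb1

# Add the physical volume to an existing volume group
vgextend VolGroup00 /dev/sdb1

# Grow a logical volume by 10 GB using the new free space
lvextend -L +10G /dev/VolGroup00/LogVol00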
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/logical_volume_manager_lvm
22.7. The VDSM to Network Name Mapping Tool
22.7. The VDSM to Network Name Mapping Tool 22.7.1. Mapping VDSM Names to Logical Network Names If the name of a logical network is longer than 15 characters or contains non-ASCII characters, the system automatically generates an on-host identifier ( vdsm_name ) name; it comprises the letters on and the first 13 characters of the network's unique identifier, for example, ona1b2c3d4e5f6g . It is this name that appears in the host's log files. To view a list of logical network names and their auto-generated network name, use the VDSM-to-Network-Name Mapping tool located in /usr/share/ovirt-engine/bin/ . Procedure The first time you run the tool, define a PASSWORD environment variable, which is the password of a database user with read access to the Manager database. For example, run: Run the VDSM-to-Network-Name Mapping tool: where USER is the database user with read access to the Manager database, whose password is assigned to the PASSWORD environment variable. The tool displays a list of logical network names that are mapped to their equivalent on-host identifiers. Additional Flags You can run the tool with the following flags: --host is the hostname/IP address of the database server. The default value is localhost . --port is the port number of the database server. The default value is 5432 . --database is the name of the database. The default value is engine , which is the Manager database. --secure enables a secure connection with the database. By default the tool is run without a secure connection.
[ "export PASSWORD= DatabaseUserPassword", "vdsm_to_network_name_map --user USER" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-the_vdsm_to_network_name_tool
Chapter 4. The DM-Multipath Configuration File
Chapter 4. The DM-Multipath Configuration File By default, DM-Multipath provides configuration values for the most common uses of multipathing. In addition, DM-Multipath includes support for the most common storage arrays that support DM-Multipath. The default configuration values and the supported devices can be found in the /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults file. You can override the default configuration values for DM-Multipath by editing the /etc/multipath.conf configuration file. If necessary, you can also add a storage array that is not supported by default to the configuration file. Note You can set up multipathing in the initramfs file system. If you run multipath from the initramfs file system and you make any changes to the multipath configuration files, you must rebuild the initramfs file system for the changes to take effect. For information on rebuilding the initramfs file system with multipath, see Section 3.4, "Setting Up Multipathing in the initramfs File System" . This chapter provides information on parsing and modifying the multipath.conf file. It contains sections on the following topics: Configuration file overview Configuration file blacklist Configuration file defaults Configuration file multipaths Configuration file devices In the multipath configuration file, you need to specify only the sections that you need for your configuration, or that you wish to change from the default values specified in the multipath.conf.defaults file. If there are sections of the file that are not relevant to your environment or for which you do not need to override the default values, you can leave them commented out, as they are in the initial file. The configuration file allows regular expression description syntax. An annotated version of the configuration file can be found in /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated . 4.1. Configuration File Overview The multipath configuration file is divided into the following sections: blacklist Listing of specific devices that will not be considered for multipath. blacklist_exceptions Listing of multipath candidates that would otherwise be blacklisted according to the parameters of the blacklist section. defaults General default settings for DM-Multipath. multipaths Settings for the characteristics of individual multipath devices. These values overwrite what is specified in the defaults and devices sections of the configuration file. devices Settings for the individual storage controllers. These values overwrite what is specified in the defaults section of the configuration file. If you are using a storage array that is not supported by default, you may need to create a devices subsection for your array. When the system determines the attributes of a multipath device, first it checks the multipath settings, then the per devices settings, then the multipath system defaults.
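A minimal /etc/multipath.conf often needs only a blacklist and a small defaults section. The following snippet is an illustrative sketch rather than a recommended configuration; the WWID is a placeholder and the devnode expression mirrors the default blacklist.

blacklist {
    # Exclude a specific device by WWID (placeholder value)
    wwid 26353900f02796769
    # Exclude device nodes that should never be multipathed
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}

defaults {
    user_friendly_names yes
}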
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/mpio_configfile
2.4. Configuring a High Availability pcsd Web UI
2.4. Configuring a High Availability pcsd Web UI When you use the pcsd Web UI, you connect to one of the nodes of the cluster to display the cluster management pages. If the node to which you are connecting goes down or becomes unavailable, you can reconnect to the cluster by opening your browser to a URL that specifies a different node of the cluster. It is possible, however, to configure the pcsd Web UI itself for high availability, in which case you can continue to manage the cluster without entering a new URL. To configure the pcsd Web UI for high availability, perform the following steps. Ensure that PCSD_SSL_CERT_SYNC_ENABLED is set to true in the /etc/sysconfig/pcsd configuration file, which is the default value in RHEL 7. Enabling certificate syncing causes pcsd to sync the pcsd certificates for the cluster setup and node add commands. Create an IPaddr2 cluster resource, which is a floating IP address that you will use to connect to the pcsd Web UI. The IP address must not be one already associated with a physical node. If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as one of the node's statically assigned IP addresses, otherwise the NIC device to assign the floating IP address cannot be properly detected. Create custom SSL certificates for use with pcsd and ensure that they are valid for the addresses of the nodes used to connect to the pcsd Web UI. To create custom SSL certificates, you can use either wildcard certificates or you can use the Subject Alternative Name certificate extension. For information on the Red Hat Certificate System, see the Red Hat Certificate System Administration Guide . Install the custom certificates for pcsd with the pcs pcsd certkey command. Sync the pcsd certificates to all nodes in the cluster with the pcs pcsd sync-certificates command. Connect to the pcsd Web UI using the floating IP address you configured as a cluster resource. Note Even when you configure the pcsd Web UI for high availability, you will be asked to log in again when the node to which you are connecting goes down.
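The resource and certificate steps described above typically translate into pcs commands similar to the following sketch; the resource name, IP address, and file paths are placeholders:
pcs resource create WebUI-VIP ocf:heartbeat:IPaddr2 ip=192.0.2.100 cidr_netmask=24   # floating IP address, example value
pcs pcsd certkey /path/to/custom.crt /path/to/custom.key                             # install the custom certificate and key
pcs pcsd sync-certificates                                                           # sync the pcsd certificates to all nodes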
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-hawebui-haar
Chapter 11. 4.8 Release Notes
Chapter 11. 4.8 Release Notes 11.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.8. RHUA Server Can Be Updated During the Update Process A native Ansible module is now used to update the packages on the RHUA server, during the update process. This update can be prevented by using the --ignore-newer-rhel-packages flag. New Pulp version This update introduces a newer version of Pulp, 3.39. 11.2. Bug Fixes The following bugs have been fixed in Red Hat Update Infrastructure 4.8 that have a significant impact on users. Installation no longer fails on RHEL 8.10 Beta The rhui-installer failed on RHEL 8.10 Beta due to the use of distutils. This has been addressed by updating to a newer version of ansible-collection-community-crypto which does not use the distutils.
null
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-8-release-notes_release-notes
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/package_manifest/providing-feedback-on-red-hat-documentation_package-manifest
Chapter 30. Finding information on Kafka restarts
Chapter 30. Finding information on Kafka restarts After the Cluster Operator restarts a Kafka pod in an OpenShift cluster, it emits an OpenShift event into the pod's namespace explaining why the pod restarted. For help in understanding cluster behavior, you can check restart events from the command line. Tip You can export and monitor restart events using metrics collection tools like Prometheus. Use the metrics tool with an event exporter that can export the output in a suitable format. 30.1. Reasons for a restart event The Cluster Operator initiates a restart event for a specific reason. You can check the reason by fetching information on the restart event. Table 30.1. Restart reasons Event Description CaCertHasOldGeneration The pod is still using a server certificate signed with an old CA, so needs to be restarted as part of the certificate update. CaCertRemoved Expired CA certificates have been removed, and the pod is restarted to run with the current certificates. CaCertRenewed CA certificates have been renewed, and the pod is restarted to run with the updated certificates. ClusterCaCertKeyReplaced The key used to sign the cluster's CA certificates has been replaced, and the pod is being restarted as part of the CA renewal process. ConfigChangeRequiresRestart Some Kafka configuration properties are changed dynamically, but others require that the broker be restarted. FileSystemResizeNeeded The file system size has been increased, and a restart is needed to apply it. KafkaCertificatesChanged One or more TLS certificates used by the Kafka broker have been updated, and a restart is needed to use them. ManualRollingUpdate A user annotated the pod, or the StrimziPodSet it belongs to, to trigger a restart. PodForceRestartOnError An error occurred that requires a pod restart to rectify. JbodVolumesChanged A disk was added or removed from the Kafka volumes, and a restart is needed to apply the change. When using StrimziPodSet resources, the same reason is given if the pod needs to be recreated. PodHasOldRevision The StrimziPodSet that the pod is a member of has been updated, so the pod needs to be recreated. When using StrimziPodSet resources, the same reason is given if a disk was added or removed from the Kafka volumes. PodStuck The pod is still pending, and is not scheduled or cannot be scheduled, so the operator has restarted the pod in a final attempt to get it running. PodUnresponsive Streams for Apache Kafka was unable to connect to the pod, which can indicate a broker not starting correctly, so the operator restarted it in an attempt to resolve the issue. 30.2. Restart event filters When checking restart events from the command line, you can specify a field-selector to filter on OpenShift event fields. The following fields are available when filtering events with field-selector . regardingObject.kind The object that was restarted, and for restart events, the kind is always Pod . regarding.namespace The namespace that the pod belongs to. regardingObject.name The pod's name, for example, strimzi-cluster-kafka-0 . regardingObject.uid The unique ID of the pod. reason The reason the pod was restarted, for example, JbodVolumesChanged . reportingController The reporting component is always strimzi.io/cluster-operator for Streams for Apache Kafka restart events. source source is an older version of reportingController . The reporting component is always strimzi.io/cluster-operator for Streams for Apache Kafka restart events. type The event type, which is either Warning or Normal . 
For Streams for Apache Kafka restart events, the type is Normal . Note In older versions of OpenShift, the fields using the regarding prefix might use an involvedObject prefix instead. reportingController was previously called reportingComponent . 30.3. Checking Kafka restarts Use an oc command to list restart events initiated by the Cluster Operator. Filter restart events emitted by the Cluster Operator by setting the Cluster Operator as the reporting component using the reportingController or source event fields. Prerequisites The Cluster Operator is running in the OpenShift cluster. Procedure Get all restart events emitted by the Cluster Operator: oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator Example showing events returned LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled You can also specify a reason or other field-selector options to constrain the events returned. Here, a specific reason is added: oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError Use an output format, such as YAML, to return more detailed information about one or more events. oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml Example showing detailed events output apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: "2022-05-13T00:22:34.168086Z" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: "2022-05-13T00:22:34Z" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: "432961" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: "" selfLink: "" The following fields are deprecated, so they are not populated for these events: firstTimestamp lastTimestamp source
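You can combine the fields listed in the filter section; for example, the following query, with a placeholder pod name, limits the output to restart events for a single pod:
oc -n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,regardingObject.kind=Pod,regardingObject.name=strimzi-cluster-kafka-0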
[ "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator", "LAST SEEN TYPE REASON OBJECT MESSAGE 2m Normal CaCertRenewed pod/strimzi-cluster-kafka-0 CA certificate renewed 58m Normal PodForceRestartOnError pod/strimzi-cluster-kafka-1 Pod needs to be forcibly restarted due to an error 5m47s Normal ManualRollingUpdate pod/strimzi-cluster-kafka-2 Pod was manually annotated to be rolled", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError", "-n kafka get events --field-selector reportingController=strimzi.io/cluster-operator,reason=PodForceRestartOnError -o yaml", "apiVersion: v1 items: - action: StrimziInitiatedPodRestart apiVersion: v1 eventTime: \"2022-05-13T00:22:34.168086Z\" firstTimestamp: null involvedObject: kind: Pod name: strimzi-cluster-kafka-1 namespace: kafka kind: Event lastTimestamp: null message: Pod needs to be forcibly restarted due to an error metadata: creationTimestamp: \"2022-05-13T00:22:34Z\" generateName: strimzi-event name: strimzi-eventwppk6 namespace: kafka resourceVersion: \"432961\" uid: 29fcdb9e-f2cf-4c95-a165-a5efcd48edfc reason: PodForceRestartOnError reportingController: strimzi.io/cluster-operator reportingInstance: strimzi-cluster-operator-6458cfb4c6-6bpdp source: {} type: Normal kind: List metadata: resourceVersion: \"\" selfLink: \"\"" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-deploy-restart-events-str
Chapter 14. Integrating with the Red Hat build of Debezium for change data capture
Chapter 14. Integrating with the Red Hat build of Debezium for change data capture The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with Streams for Apache Kafka. Following a deployment of Streams for Apache Kafka, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to Streams for Apache Kafka on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium has multiple uses, including: Data replication Updating caches and search indexes Simplifying monolithic applications Data integration Enabling streaming queries To capture database changes, deploy Kafka Connect with a Debezium database connector. You configure a KafkaConnector resource to define the connector instance. For more information on deploying the Red Hat build of Debezium with Streams for Apache Kafka, refer to the product documentation . The documentation includes a Getting Started with Debezium guide that guides you through the process of setting up the services and connector required to view change event records for database updates.
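For illustration, a KafkaConnector resource for a Debezium connector generally follows the shape sketched below; the cluster name, connector class, and database settings are placeholders, and the exact configuration options depend on the Debezium connector and version you deploy:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector                  # example connector name
  labels:
    strimzi.io/cluster: my-connect-cluster   # name of your KafkaConnect resource
spec:
  class: io.debezium.connector.mysql.MySqlConnector   # example Debezium connector class
  tasksMax: 1
  config:
    database.hostname: mysql.example.com     # placeholder connection details
    database.port: 3306
    database.user: debezium
    database.password: changeme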
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/ref-using-debezium-str
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.23/pr01
function::ansi_cursor_hide
function::ansi_cursor_hide Name function::ansi_cursor_hide - Hides the cursor. Synopsis Arguments None Description Sends the ANSI escape code for hiding the cursor.
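A minimal SystemTap usage sketch follows; it assumes ansi_cursor_show , the companion function in the same tapset, is available to restore the cursor:
# hypothetical usage: hide the cursor for the duration of the script
probe begin { ansi_cursor_hide() }
probe end { ansi_cursor_show() }   # assumed companion function from the same ANSI tapset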
[ "ansi_cursor_hide()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ansi-cursor-hide
Chapter 4. Upgrading your system from RHEL 6 to RHEL 7
Chapter 4. Upgrading your system from RHEL 6 to RHEL 7 After you have corrected all problems reported by the Preupgrade Assistant, use the Red Hat Upgrade Tool to upgrade your system from RHEL 6.10 to RHEL 7.9. Always perform any necessary post-upgrade tasks to ensure your system is up-to-date and to prevent upgrade-related problems. Important Test the upgrade process on a safe, non-production system before you perform it on any production system. Prerequisites You have completed the preparation steps described in Preparing a RHEL 6 system for the upgrade , including a full system backup. You have performed the pre-upgrade system assessment and resolved all reported problems. For details, see Assessing system upgrade suitability . Procedure Prepare source repositories or media with RHEL 7 packages in one of the following locations: An installation repository created from a DVD ISO where you download RHEL 7 packages, for example, an FTP server or an HTTPS site that contains the RHEL 7.9 packages. For more information, see Preparing installation sources . Mounted installation media An ISO image In any of the above options, you can configure custom repositories and additional repositories provided by Red Hat. For example, certain packages available in the RHEL 6 Base system are provided in the RHEL 7 Extras repository and are not on a RHEL 7 DVD. If you know that your system requires packages that are not in the RHEL 7 Base repository, you can install a separate RHEL 7 system to act as a yum repository that provides the required packages over FTP or HTTP. To set up an additional repository that you can use during the upgrade, follow instructions in How to create a local repository for updates . Then use the --addrepo=REPOID=URL option with the redhat-upgrade-tool command. Important It is strongly recommended to use RHEL 7.9 GA source repositories to prevent booting issues after the upgrade. For more information, see Known Issues . Disable active repositories to prevent problems with combining packages from different major releases of RHEL. Install the yum-utils package: Disable active repositories: For more information, see Can I install packages from different versions of RHEL . Run the Red Hat Upgrade Tool to download RHEL 7 packages and prepare the package installation. Specify the location of the Red Hat Enterprise Linux 7 packages: Installation repository Mounted installation media If you do not specify the device path, the Red Hat Upgrade Tool scans all mounted removable devices. ISO image Important You can use the following options with the redhat-upgrade-tool command for all three locations: --cleanup-post: Automatically removes Red Hat-signed packages that do not have a RHEL 7 replacement. Recommended. If you do not use the --cleanup-post option, you must remove all remaining RHEL 6 packages after the in-place upgrade to ensure that your system is fully supported. --snapshot-root-lv and --snapshot-lv: Creates snapshots of system volumes. Snapshots are required to perform a rollback of the RHEL system in case of upgrade failure. For more information, see Rollbacks and cleanup after upgrading RHEL 6 to RHEL 7 . Reboot the system when prompted. Depending on the number of packages being upgraded, this process can take up to several hours to complete. Manually perform any post-upgrade tasks described in the pre-upgrade assessment result. If your system architecture is 64-bit Intel, upgrade from GRUB Legacy to GRUB 2. See the System Administrators Guide for more information. 
If Samba is installed on the upgraded host, manually run the testparm utility to verify the /etc/samba/smb.conf file. If the utility reports any configuration errors, you must fix them before you can start Samba. Optional: If you did not use the --cleanup-post option when running the Red Hat Upgrade Tool, clean up orphaned RHEL 6 packages: Warning Be careful not to accidentally remove custom packages that are compatible with RHEL 7. Warning Using the rpm command to remove orphaned packages might cause broken dependencies in some RHEL 7 packages. Refer to Fixing dependency errors for information about how to fix those dependency errors. Update your new RHEL 7 packages to their latest version. Verification Verify that the system was upgraded to the latest version of RHEL 7. Verify that the system is automatically resubscribed for RHEL 7. If the repository list does not contain RHEL repositories, run the following commands to unsubscribe the system, resubscribe the system as a RHEL 7 system, and add required repositories: If any problems occur during or after the in-place upgrade, see Troubleshooting for assistance.
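For the Samba check mentioned above, the verification typically amounts to running testparm against the default configuration path; adjust the path if your smb.conf is stored elsewhere:
testparm /etc/samba/smb.conf   # reports configuration errors that must be fixed before starting Samba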
[ "yum install yum-utils", "yum-config-manager --disable \\*", "redhat-upgrade-tool --network 7.9 --instrepo ftp-or-http-url --cleanup-post", "redhat-upgrade-tool --device device_path --cleanup-post", "redhat-upgrade-tool --iso iso_path --cleanup-post", "reboot", "rpm -qa | grep .el6 &> /tmp/el6.txt rpm -e USD(cat /tmp/el6.txt) --nodeps", "yum update reboot", "cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.9 (Maipo)", "yum repolist Loaded plugins: product-id, subscription-manager repo id repo name status rhel-7-server-rpms/7Server/x86_64 Red Hat Enterprise Linux 7 Server (RPMs) 23,676", "subscription-manager remove --all subscription-manager unregister subscription-manager register subscription-manager attach --pool= poolID subscription-manager repos --enable= repoID" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/upgrading_from_rhel_6_to_rhel_7/upgrading-your-system-from-rhel-6-to-rhel-7_upgrading-from-rhel-6-to-rhel-7
Deploying the Red Hat Quay Operator on OpenShift Container Platform
Deploying the Red Hat Quay Operator on OpenShift Container Platform Red Hat Quay 3.13 Deploying the Red Hat Quay Operator on OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/index
function::user_string_n_warn
function::user_string_n_warn Name function::user_string_n_warn - Retrieves string from user space. Synopsis Arguments addr The user space address to retrieve the string from. n The maximum length of the string (if not null terminated). General Syntax user_string_n_warn:string(addr:long, n:long) Description Returns up to n characters of a C string from a given user space memory address. Reports " <unknown> " on the rare cases when userspace data is not accessible and warns (but does not abort) about the failure.
[ "function user_string_n_warn:string(addr:long,n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-string-n-warn
B.9. Glock Statistics
B.9. Glock Statistics GFS2 maintains statistics that can help track what is going on within the file system. This allows you to spot performance issues. GFS2 maintains two counters: dcount , which counts the number of DLM operations requested. This shows how much data has gone into the mean/variance calculations. qcount , which counts the number of syscall level operations requested. Generally qcount will be equal to or greater than dcount . In addition, GFS2 maintains three mean/variance pairs. The mean/variance pairs are smoothed exponential estimates and the algorithm used is the one used to calculate round trip times in network code. The mean and variance pairs maintained in GFS2 are not scaled, but are in units of integer nanoseconds. srtt/srttvar: Smoothed round trip time for non-blocking operations srttb/srttvarb: Smoothed round trip time for blocking operations irtt/irttvar: Inter-request time (for example, time between DLM requests) A non-blocking request is one which will complete right away, whatever the state of the DLM lock in question. That currently means any requests when (a) the current state of the lock is exclusive (b) the requested state is either null or unlocked or (c) the "try lock" flag is set. A blocking request covers all the other lock requests. Larger times are better for IRTTs, whereas smaller times are better for the RTTs. Statistics are kept in two sysfs files: The glstats file. This file is similar to the glocks file, except that it contains statistics, with one glock per line. The data is initialized from "per cpu" data for that glock type for which the glock is created (aside from counters, which are zeroed). This file may be very large. The lkstats file. This contains "per cpu" stats for each glock type. It contains one statistic per line, in which each column is a cpu core. There are eight lines per glock type, with types following on from each other.
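As a hypothetical example, on a system where these statistics are exposed under the GFS2 debug directory, you could inspect them as follows; the cluster and file system names are placeholders, and the exact location can vary between releases (it is commonly under debugfs):
# placeholder cluster name and file system name
cat /sys/kernel/debug/gfs2/mycluster:myfs/glstats | head
cat /sys/kernel/debug/gfs2/mycluster:myfs/lkstats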
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ap-glockstats-gfs2
Chapter 6. About the web terminal in the web console
Chapter 6. About the web terminal in the web console You can launch an embedded command line terminal instance in the OpenShift web console. You must first install the Web Terminal Operator to use the web terminal. Note Cluster administrators can access the web terminal in OpenShift Container Platform 4.7 and later. This terminal instance is preinstalled with common CLI tools for interacting with the cluster, such as oc , kubectl , odo , kn , tkn , helm , kubens , subctl , and kubectx . It also has the context of the project you are working on and automatically logs you in using your credentials. Important Web terminal is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 6.1. Installing the web terminal You can install the web terminal using the Web Terminal Operator listed in the OpenShift Container Platform OperatorHub. When you install the Web Terminal Operator, the custom resource definitions (CRDs) that are required for the command line configuration, such as the DevWorkspace CRD, are automatically installed. The web console creates the required resources when you open the web terminal. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure In the Administrator perspective of the web console, navigate to Operators OperatorHub . Use the Filter by keyword box to search for the Web Terminal Operator in the catalog, and then click the Web Terminal tile. Read the brief description about the Operator on the Web Terminal page, and then click Install . On the Install Operator page, retain the default values for all fields. The alpha option in the Update Channel menu enables installation of the latest release of the Web Terminal Operator. The All namespaces on the cluster option in the Installation Mode menu enables the Operator to watch and be available to all namespaces in the cluster. The openshift-operators option in the Installed Namespace menu installs the Operator in the default openshift-operators namespace. The Automatic option in the Approval Strategy menu ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager. Click Install . In the Installed Operators page, click the View operator to verify that the Operator is listed on the Installed Operators page. After the Operator is installed, refresh your page to see the command line terminal icon on the upper right of the console. 6.2. Using the web terminal After the Web Terminal Operator is installed, you can use the web terminal as follows: To launch the web terminal, click the command line terminal icon ( ) on the upper right of the console. A web terminal instance is displayed in the Command line terminal pane. This instance is automatically logged in with your credentials. Select the project where the DevWorkspace CR must be created from the Project drop-down list. By default, the current project is selected. Note The DevWorkspace CR is created only if it does not already exist. 
The openshift-terminal project is the default project used for cluster administrators. They do not have the option to choose another project. Click Start to initialize the web terminal using the selected project. After the web terminal is initialized, you can use the preinstalled CLI tools like oc , kubectl , odo , kn , tkn , helm , kubens , subctl , and kubectx in the web terminal. 6.3. Uninstalling the web terminal Uninstalling the web terminal is a two-step process: Delete the components and custom resources (CRs) that were added when you installed the Operator. Uninstall the Web Terminal Operator. Uninstalling the Web Terminal Operator does not remove any of its custom resource definitions (CRDs) or managed resources that are created when the Operator is installed. These components must be manually uninstalled for security purposes. Removing these components also allows you to save cluster resources by ensuring that terminals do not idle when the Operator is uninstalled. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. 6.3.1. Deleting the web terminal components and custom resources Use the CLI to delete the CRs that are created during installation of the Web Terminal Operator. Procedure Run the following commands to ensure that all DevWorkspace CRs are removed along with their related Kubernetes objects, such as deployments. USD oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait USD oc delete workspaceroutings.controller.devfile.io --all-namespaces --all --wait USD oc delete components.controller.devfile.io --all-namespaces --all --wait Warning If this step is not complete, finalizers make it difficult to fully uninstall the Operator easily. Run the following commands to remove the CRDs: USD oc delete customresourcedefinitions.apiextensions.k8s.io workspaceroutings.controller.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io components.controller.devfile.io USD oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io Remove the DevWorkspace-Webhook-Server deployment: USD oc delete deployment/devworkspace-webhook-server -n openshift-operators Note When you run this and the following steps, you cannot use the oc exec commands to run commands in a container. After you remove the webhooks you will be able to use the oc exec commands again. Run the following commands to remove any lingering services, secrets, and config maps: USD oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server USD oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators USD oc delete configmap devworkspace-controller -n openshift-operators USD oc delete clusterrole devworkspace-webhook-server USD oc delete clusterrolebinding devworkspace-webhook-server Run the following commands to remove mutating or validating webhook configurations: USD oc delete mutatingwebhookconfigurations controller.devfile.io USD oc delete validatingwebhookconfigurations controller.devfile.io 6.3.2. Uninstalling the Operator using the web console Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Scroll the filter list or type a keyword into the Filter by name box to find the Web Terminal Operator. Click the Options menu for the Web Terminal Operator, and then select Uninstall Operator . 
In the Uninstall Operator confirmation dialog box, click Uninstall to remove the Operator, Operator deployments, and pods from the cluster. The Operator stops running and no longer receives updates.
[ "oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait", "oc delete workspaceroutings.controller.devfile.io --all-namespaces --all --wait", "oc delete components.controller.devfile.io --all-namespaces --all --wait", "oc delete customresourcedefinitions.apiextensions.k8s.io workspaceroutings.controller.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io components.controller.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io", "oc delete deployment/devworkspace-webhook-server -n openshift-operators", "oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server", "oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators", "oc delete configmap devworkspace-controller -n openshift-operators", "oc delete clusterrole devworkspace-webhook-server", "oc delete clusterrolebinding devworkspace-webhook-server", "oc delete mutatingwebhookconfigurations controller.devfile.io", "oc delete validatingwebhookconfigurations controller.devfile.io" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/web_console/odc-about-web-terminal
9.6. Enabling Publishing
9.6. Enabling Publishing Publishing can be enabled for only files, only LDAP, or both. Publishing should be enabled after setting up publishers, rules, and mappers. Once enabled, the server attempts to begin publishing. If publishing was not configured correctly before being enabled, publishing may exhibit undesirable behavior or may fail. Note Configure CRLs. CRLs must be configured before they can be published. See Chapter 7, Revoking Certificates and Issuing CRLs . Log into the Certificate Manager Console. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing . The right pane shows the details for publishing to an LDAP-compliant directory. To enable publishing to a file only, select Enable Publishing . To enable LDAP publishing, select both Enable Publishing and Enable Default LDAP Connection . In the Destination section, set the information for the Directory Server instance. Host name . If the Directory Server is configured for SSL client authenticated communication, the name must match the cn component in the subject DN of the Directory Server's SSL server certificate. The hostname can be the fully-qualified domain name or an IPv4 or IPv6 address. Port number . Directory Manager DN . This is the distinguished name (DN) of the directory entry that has Directory Manager privileges. The Certificate Manager uses this DN to access the directory tree and to publish to the directory. The access control set up for this DN determines whether the Certificate Manager can perform publishing. It is possible to create another DN that has limited read-write permissions for only those attributes that the publishing system actually needs to write. Password. This is the password which the CA uses to bind to the LDAP directory to which the certificate or CRL is published. The Certificate Manager saves this password in its password.conf file. For example: Note The parameter name which identifies the publishing password ( CA LDAP Publishing ) is set in the Certificate Manager's CS.cfg file in the ca.publish.ldappublish.ldap.ldapauth.bindPWPrompt parameter, and it can be edited. Client certificate . This sets the certificate the Certificate Manager uses for SSL client authentication to the publishing directory. By default, the Certificate Manager uses its SSL server certificate. LDAP version . Select LDAP version 3. Authentication . The way the Certificate Manager authenticates to the Directory Server. The choices are Basic authentication and SSL client authentication . If the Directory Server is configured for basic authentication or for SSL communication without client authentication, select Basic authentication and specify values for the Directory manager DN and password. If the Directory Server is configured for SSL communication with client authentication, select SSL client authentication and the Use SSL communication option, and identify the certificate that the Certificate Manager must use for SSL client authentication to the directory. The server attempts to connect to the Directory Server. If the information is incorrect, the server displays an error message. Note pkiconsole is being deprecated.
[ "pkiconsole https://server.example.com:8443/ca", "CA LDAP Publishing:password" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/enabling_publishing
Creating and managing manifests for a connected Satellite Server
Creating and managing manifests for a connected Satellite Server Subscription Central 1-latest Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html-single/creating_and_managing_manifests_for_a_connected_satellite_server/index
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/proc_providing-feedback-on-red-hat-documentation_default
1.2. Why Virtualization Security Matters
1.2. Why Virtualization Security Matters Deploying virtualization in your infrastructure provides many benefits, but can also introduce new risks. Virtualized resources and services should be deployed with the following security considerations: The host and hypervisor become prime targets; they are often a single point of failure for guests and data. Virtual machines can interfere with each other in undesirable ways. If no access controls are in place to help prevent this, one malicious guest can bypass a vulnerable hypervisor and directly access other resources on the host system, such as the storage of other guests. Resources and services can become difficult to track and maintain; with rapid deployment of virtualized systems comes an increased need for management of resources, including sufficient patching, monitoring and maintenance. Resources such as storage can be spread across, and dependent upon, several machines. This can lead to overly complex environments and poorly managed and maintained systems. Virtualization does not remove any of the traditional security risks present in your environment; the entire solution stack, not just the virtualization layer, must be secured. This guide aims to assist you in mitigating your security risks by offering a number of virtualization recommended practices for Red Hat Enterprise Linux that will help you secure your virtualized infrastructure.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-introduction-why_virtualization_security_matters
Chapter 1. Introducing .NET 9.0
Chapter 1. Introducing .NET 9.0 .NET is a general-purpose development platform featuring automatic memory management and modern programming languages. Using .NET, you can build high-quality applications efficiently. .NET is available on Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform through certified containers. .NET offers the following features: The ability to follow a microservices-based approach, where some components are built with .NET and others with Java, but all can run on a common, supported platform on RHEL and OpenShift Container Platform. The capacity to more easily develop new .NET workloads on Microsoft Windows. You can deploy and run your applications on either RHEL or Windows Server. A heterogeneous data center, where the underlying infrastructure is capable of running .NET applications without having to rely solely on Windows Server.
null
https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_8/introducing-dotnet_getting-started-with-dotnet-on-rhel-8
Provisioning APIs
Provisioning APIs OpenShift Container Platform 4.12 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/provisioning_apis/index
Chapter 43. ContainerEnvVar schema reference
Chapter 43. ContainerEnvVar schema reference Used in: ContainerTemplate Property Property type Description name string The environment variable key. value string The environment variable value.
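For illustration, a ContainerEnvVar entry appears inside the env list of a container template; the sketch below assumes a Kafka custom resource and uses placeholder names:
spec:
  kafka:
    template:
      kafkaContainer:
        env:
          - name: EXAMPLE_ENV_VAR    # environment variable key
            value: example-value     # environment variable value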
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-containerenvvar-reference
Chapter 12. Propagating SCAP content through the load balancer
Chapter 12. Propagating SCAP content through the load balancer If you use OpenSCAP to manage security compliance on your clients, you must configure the SCAP client to send ARF reports to the load balancer instead of Capsule. The configuration procedure depends on the method you have selected to deploy compliance policies. 12.1. Propagating SCAP content using Ansible deployment Using this procedure, you can promote Security Content Automation Protocol (SCAP) content through the load balancer in the scope of the Ansible deployment method. Prerequisites Ensure that you have configured Satellite for Ansible deployment of compliance policies. For more information, see Configuring Compliance Policy Deployment Methods in Managing security compliance . Procedure In the Satellite web UI, navigate to Configure > Ansible > Variables . Search for the foreman_scap_client_port variable and click its name. In the Default Behavior area, ensure that the Override checkbox is selected. In the Parameter Type list, ensure that integer is selected. In the Default Value field, enter 9090 . In the Specify Matchers area, remove all matchers that override the default value. Click Submit . Search for the foreman_scap_client_server variable and click its name. In the Default Behavior area, ensure that the Override checkbox is selected. In the Parameter Type list, ensure that string is selected. In the Default Value field, enter the FQDN of your load balancer, such as loadbalancer.example.com . In the Specify Matchers area, remove all matchers that override the default value. Click Submit . Continue with deploying a compliance policy using Ansible. For more information, see: Deploying a Policy in a Host Group Using Ansible in Managing security compliance Deploying a Policy on a Host Using Ansible in Managing security compliance Verification On the client, verify that the /etc/foreman_scap_client/config.yaml file contains the following lines: # Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090 12.2. Propagating SCAP content using Puppet deployment Using this procedure, you can promote Security Content Automation Protocol (SCAP) content through the load balancer in the scope of the Puppet deployment method. Prerequisites Ensure that you have configured Satellite for Puppet deployment of compliance policies. For more information, see Configuring Compliance Policy Deployment Methods in Managing security compliance . Procedure In the Satellite web UI, navigate to Configure > Puppet ENC > Classes . Click foreman_scap_client . Click the Smart Class Parameter tab. In the pane to the left of the Smart Class Parameter window, click port . In the Default Behavior area, select the Override checkbox. From the Key Type list, select integer . In the Default Value field, enter 9090 . In the pane to the left of the Smart Class Parameter window, click server . In the Default Behavior area, select the Override checkbox. From the Key Type list, select string . In the Default Value field, enter the FQDN of your load balancer, such as loadbalancer.example.com . In the lower left of the Smart Class Parameter window, click Submit . Continue with deploying a compliance policy using Puppet. 
For more information, see: Deploying a Policy in a Host Group Using Puppet in Managing security compliance Deploying a Policy on a Host Using Puppet in Managing security compliance Verification On the client, verify that the /etc/foreman_scap_client/config.yaml file contains the following lines: # Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090
[ "Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090", "Foreman proxy to which reports should be uploaded :server: ' loadbalancer.example.com ' :port: 9090" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/propagating-scap-content-through-the-load-balancer_load-balancing
Chapter 5. LimitRange [v1]
Chapter 5. LimitRange [v1] Description LimitRange sets resource usage limits for each kind of resource in a Namespace. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LimitRangeSpec defines a min/max usage limit for resources that match on kind. 5.1.1. .spec Description LimitRangeSpec defines a min/max usage limit for resources that match on kind. Type object Required limits Property Type Description limits array Limits is the list of LimitRangeItem objects that are enforced. limits[] object LimitRangeItem defines a min/max usage limit for any resource that matches on kind. 5.1.2. .spec.limits Description Limits is the list of LimitRangeItem objects that are enforced. Type array 5.1.3. .spec.limits[] Description LimitRangeItem defines a min/max usage limit for any resource that matches on kind. Type object Required type Property Type Description default object (Quantity) Default resource requirement limit value by resource name if resource limit is omitted. defaultRequest object (Quantity) DefaultRequest is the default resource requirement request value by resource name if resource request is omitted. max object (Quantity) Max usage constraints on this kind by resource name. maxLimitRequestRatio object (Quantity) MaxLimitRequestRatio if specified, the named resource must have a request and limit that are both non-zero where limit divided by request is less than or equal to the enumerated value; this represents the max burst for the named resource. min object (Quantity) Min usage constraints on this kind by resource name. type string Type of resource that this limit applies to. 5.2. API endpoints The following API endpoints are available: /api/v1/limitranges GET : list or watch objects of kind LimitRange /api/v1/watch/limitranges GET : watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/limitranges DELETE : delete collection of LimitRange GET : list or watch objects of kind LimitRange POST : create a LimitRange /api/v1/watch/namespaces/{namespace}/limitranges GET : watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/limitranges/{name} DELETE : delete a LimitRange GET : read the specified LimitRange PATCH : partially update the specified LimitRange PUT : replace the specified LimitRange /api/v1/watch/namespaces/{namespace}/limitranges/{name} GET : watch changes to an object of kind LimitRange. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/limitranges HTTP method GET Description list or watch objects of kind LimitRange Table 5.1. 
HTTP responses HTTP code Reponse body 200 - OK LimitRangeList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/limitranges HTTP method GET Description watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. Table 5.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/limitranges HTTP method DELETE Description delete collection of LimitRange Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind LimitRange Table 5.5. HTTP responses HTTP code Reponse body 200 - OK LimitRangeList schema 401 - Unauthorized Empty HTTP method POST Description create a LimitRange Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body LimitRange schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 202 - Accepted LimitRange schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/limitranges HTTP method GET Description watch individual changes to a list of LimitRange. deprecated: use the 'watch' parameter with a list operation instead. Table 5.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/limitranges/{name} Table 5.10. Global path parameters Parameter Type Description name string name of the LimitRange HTTP method DELETE Description delete a LimitRange Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.12. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified LimitRange Table 5.13. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified LimitRange Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified LimitRange Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body LimitRange schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK LimitRange schema 201 - Created LimitRange schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/limitranges/{name} Table 5.19. 
Global path parameters Parameter Type Description name string name of the LimitRange HTTP method GET Description watch changes to an object of kind LimitRange. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
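For context, a typical LimitRange manifest submitted to the create endpoint above looks like the following sketch; the name, namespace, and resource values are placeholders:
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limits         # placeholder name
  namespace: example-project   # placeholder namespace
spec:
  limits:
    - type: Container
      default:                 # limit applied when a container sets none
        cpu: 500m
      defaultRequest:          # request applied when a container sets none
        cpu: 250m
      max:
        cpu: "1"
      min:
        cpu: 100m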
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/limitrange-v1
3.5.2. How This Affects Load Balancer Add-On Routing
3.5.2. How This Affects Load Balancer Add-On Routing IPVS packet forwarding only allows connections in and out of the cluster based on recognizing the connection's port number or firewall mark. If a client from outside the cluster attempts to open a port that IPVS is not configured to handle, it drops the connection. Similarly, if the real server attempts to open a connection back out to the Internet on a port that IPVS does not know about, it drops the connection. This means all connections from FTP clients on the Internet must have the same firewall mark assigned to them and all connections from the FTP server must be properly forwarded to the Internet using network packet filtering rules. Note In order to enable passive FTP connections, ensure that you have the ip_vs_ftp kernel module loaded, which you can do by running the command modprobe ip_vs_ftp as an administrative user at a shell prompt.
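A sketch of the packet-filtering side, assuming a placeholder virtual IP address of 192.0.2.10 and a passive port range of 10000 to 20000 configured on the FTP server, might look like the following:
modprobe ip_vs_ftp
# assign the same firewall mark (21 is an example) to the FTP control port and the passive data ports
iptables -t mangle -A PREROUTING -p tcp -d 192.0.2.10/32 --dport 21 -j MARK --set-mark 21
iptables -t mangle -A PREROUTING -p tcp -d 192.0.2.10/32 --dport 10000:20000 -j MARK --set-mark 21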
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s2-lvs-ftp-vsa
Chapter 5. Using OpenID Connect (OIDC) multitenancy
Chapter 5. Using OpenID Connect (OIDC) multitenancy This guide demonstrates how your OpenID Connect (OIDC) application can support multitenancy to serve multiple tenants from a single application. These tenants can be distinct realms or security domains within the same OIDC provider or even distinct OIDC providers. Each customer functions as a distinct tenant when serving multiple customers from the same application, such as in a SaaS environment. By enabling multitenancy support to your applications, you can support distinct authentication policies for each tenant, even authenticating against different OIDC providers, such as Keycloak and Google. To authorize a tenant by using Bearer Token Authorization, see the OpenID Connect (OIDC) Bearer token authentication guide. To authenticate and authorize a tenant by using the OIDC authorization code flow, read the OpenID Connect authorization code flow mechanism for protecting web applications guide. Also, see the OpenID Connect (OIDC) configuration properties reference guide. 5.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.9.6 A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) jq tool 5.2. Architecture In this example, we build a very simple application that supports two resource methods: /{tenant} This resource returns information obtained from the ID token issued by the OIDC provider about the authenticated user and the current tenant. /{tenant}/bearer This resource returns information obtained from the Access Token issued by the OIDC provider about the authenticated user and the current tenant. 5.3. Solution For a thorough understanding, we recommend you build the application by following the upcoming step-by-step instructions. Alternatively, if you prefer to start with the completed example, clone the Git repository: git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.8 , or download an archive . The solution is located in the security-openid-connect-multi-tenancy-quickstart directory . 5.4. Creating the Maven project First, we need a new project. Create a new project with the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-multi-tenancy-quickstart \ --extension='oidc,resteasy-reactive-jackson' \ --no-code cd security-openid-connect-multi-tenancy-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-multi-tenancy-quickstart \ -Dextensions='oidc,resteasy-reactive-jackson' \ -DnoCode cd security-openid-connect-multi-tenancy-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. 
"-DprojectArtifactId=security-openid-connect-multi-tenancy-quickstart" If you already have your Quarkus project configured, add the oidc extension to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc' Using Gradle: ./gradlew addExtension --extensions='oidc' This adds the following to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc") 5.5. Writing the application Start by implementing the /{tenant} endpoint. As you can see from the source code below, it is just a regular Jakarta REST resource: package org.acme.quickstart.oidc; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; @Path("/{tenant}") public class HomeResource { /** * Injection point for the ID Token issued by the OIDC provider. */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the Access Token issued by the OIDC provider. */ @Inject JsonWebToken accessToken; /** * Returns the ID Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. * * @return ID Token info */ @GET @Produces("text/html") public String getIdTokenInfo() { StringBuilder response = new StringBuilder().append("<html>") .append("<body>"); response.append("<h2>Welcome, ").append(this.idToken.getClaim("email").toString()).append("</h2>\n"); response.append("<h3>You are accessing the application within tenant <b>").append(idToken.getIssuer()).append(" boundaries</b></h3>"); return response.append("</body>").append("</html>").toString(); } /** * Returns the Access Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. 
* * @return Access Token info */ @GET @Produces("text/html") @Path("bearer") public String getAccessTokenInfo() { StringBuilder response = new StringBuilder().append("<html>") .append("<body>"); response.append("<h2>Welcome, ").append(this.accessToken.getClaim("email").toString()).append("</h2>\n"); response.append("<h3>You are accessing the application within tenant <b>").append(accessToken.getIssuer()).append(" boundaries</b></h3>"); return response.append("</body>").append("</html>").toString(); } } To resolve the tenant from incoming requests and map it to a specific quarkus-oidc tenant configuration in application.properties , create an implementation for the io.quarkus.oidc.TenantConfigResolver interface, which can dynamically resolve tenant configurations: package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import org.eclipse.microprofile.config.ConfigProvider; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); if (path.startsWith("/tenant-a")) { String keycloakUrl = ConfigProvider.getConfig().getValue("keycloak.url", String.class); OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId("tenant-a"); config.setAuthServerUrl(keycloakUrl + "/realms/tenant-a"); config.setClientId("multi-tenant-client"); config.getCredentials().setSecret("secret"); config.setApplicationType(ApplicationType.HYBRID); return Uni.createFrom().item(config); } else { // resolve to default tenant config return Uni.createFrom().nullItem(); } } } In the preceding implementation, tenants are resolved from the request path. If no tenant can be inferred, null is returned to indicate that the default tenant configuration should be used. The tenant-a application type is hybrid ; it can accept HTTP bearer tokens if provided. Otherwise, it initiates an authorization code flow when authentication is required. 5.6. Configuring the application # Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.application-type=web-app # Tenant A configuration is created dynamically in CustomTenantConfigResolver # HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated The first configuration is the default tenant configuration that should be used when the tenant cannot be inferred from the request. Be aware that a %prod profile prefix is used with quarkus.oidc.auth-server-url to support testing a multitenant application with Dev Services For Keycloak. This configuration uses a Keycloak instance to authenticate users. The second configuration, provided by TenantConfigResolver , is used when an incoming request is mapped to the tenant-a tenant. Both configurations map to the same Keycloak server instance while using distinct realms . 
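The CustomTenantResolver shown above reads a keycloak.url configuration property when it builds the tenant-a configuration. The following is a minimal sketch of how that property might be supplied when you run against your own Keycloak instance instead of relying on Dev Services for Keycloak. The property name is the one the resolver reads; the host and port are assumptions that match the Keycloak setup described later in this guide:
# Base URL of the Keycloak instance used by the dynamically created tenant-a configuration
%prod.keycloak.url=http://localhost:8180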
Alternatively, you can configure the tenant tenant-a directly in application.properties : # Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.application-type=web-app # Tenant A configuration quarkus.oidc.tenant-a.auth-server-url=http://localhost:8180/realms/tenant-a quarkus.oidc.tenant-a.client-id=multi-tenant-client quarkus.oidc.tenant-a.application-type=web-app # HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated In that case, also use a custom TenantResolver to resolve it: package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); String[] parts = path.split("/"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } } You can define multiple tenants in your configuration file. To map them correctly when resolving a tenant from your TenantResolver implementation, ensure each has a unique alias. However, using a static tenant resolution, which involves configuring tenants in application.properties and resolving them with TenantResolver , does not work for testing endpoints with Dev Services for Keycloak because it does not know how the requests are mapped to individual tenants, and cannot dynamically provide tenant-specific quarkus.oidc.<tenant-id>.auth-server-url values. Therefore, using %prod prefixes with tenant-specific URLs within application.properties does not work in either test or development mode. Note When a current tenant represents an OIDC web-app application, the current io.vertx.ext.web.RoutingContext contains a tenant-id attribute by the time the custom tenant resolver has been called for all the requests completing the code authentication flow and the already authenticated requests, when either a tenant-specific state or session cookie already exists. Therefore, when working with multiple OIDC providers, you only need a path-specific check to resolve a tenant id if the RoutingContext does not have the tenant-id attribute set, for example: package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = context.get("tenant-id"); if (tenantId != null) { return tenantId; } else { // Initial login request String path = context.request().path(); String[] parts = path.split("/"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } } } This is how Quarkus OIDC resolves static custom tenants if no custom TenantResolver is registered. A similar technique can be used with TenantConfigResolver , where a tenant-id provided in the context can return OidcTenantConfig already prepared with the request.
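As a rough illustration of that last point, the following sketch applies the same tenant-id check inside a TenantConfigResolver before falling back to a path-based lookup. The class name and the inlined tenant-a settings are assumptions reused from the earlier examples in this chapter, not additional code from the quickstart, and this resolver is meant as a replacement for, not an addition to, the resolver shown earlier, because only one TenantConfigResolver bean should be active:
package org.acme.quickstart.oidc;

import jakarta.enterprise.context.ApplicationScoped;

import io.quarkus.oidc.OidcRequestContext;
import io.quarkus.oidc.OidcTenantConfig;
import io.quarkus.oidc.TenantConfigResolver;
import io.smallrye.mutiny.Uni;
import io.vertx.ext.web.RoutingContext;

@ApplicationScoped
public class ContextAwareTenantConfigResolver implements TenantConfigResolver {

    @Override
    public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) {
        // Reuse the tenant id that Quarkus OIDC records for redirect and already
        // authenticated requests, and only inspect the path for the initial request.
        String tenantId = context.get("tenant-id");
        if (tenantId == null) {
            String[] parts = context.request().path().split("/");
            tenantId = parts.length > 1 ? parts[1] : null;
        }
        if ("tenant-a".equals(tenantId)) {
            // Build the tenant-a configuration, as in the earlier CustomTenantResolver example.
            OidcTenantConfig config = new OidcTenantConfig();
            config.setTenantId("tenant-a");
            config.setAuthServerUrl("http://localhost:8180/realms/tenant-a");
            config.setClientId("multi-tenant-client");
            config.getCredentials().setSecret("secret");
            return Uni.createFrom().item(config);
        }
        // Resolve to the default tenant configuration.
        return Uni.createFrom().nullItem();
    }
}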
Note If you also use Hibernate ORM multitenancy or MongoDB with Panache multitenancy and both tenant ids are the same and must be extracted from the Vert.x RoutingContext , you can pass the tenant id from the OIDC Tenant Resolver to the Hibernate ORM Tenant Resolver or MongoDB with Panache Mongo Database Resolver as a RoutingContext attribute, for example: public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = extractTenantId(context); context.put("tenantId", tenantId); return tenantId; } } 5.7. Starting and configuring the Keycloak server To start a Keycloak server, you can use Docker and run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev where keycloak.version is set to 24.0.0 or higher. Access your Keycloak server at localhost:8180 . Log in as the admin user to access the Keycloak administration console. The username and password are both admin . Now, import the realms for the two tenants: Import the default-tenant-realm.json to create the default realm. Import the tenant-a-realm.json to create the realm for the tenant tenant-a . For more information, see the Keycloak documentation about how to create a new realm . 5.8. Running and using the application 5.8.1. Running in developer mode To run the microservice in dev mode, use: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev 5.8.2. Running in JVM mode After exploring the application in dev mode, you can run it as a standard Java application. First, compile it: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Then run it: java -jar target/quarkus-app/quarkus-run.jar 5.8.3. Running in native mode This same demo can be compiled into native code; no modifications are required. This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary, and optimized to run with minimal resources. Compilation takes a bit longer, so this step is turned off by default; let's build again by enabling the native build: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.package.type=native After a little while, you can run this binary directly: ./target/security-openid-connect-multi-tenancy-quickstart-runner 5.9. Test the application 5.9.1. Use the browser To test the application, open your browser and access the following URL: http://localhost:8080/default If everything works as expected, you are redirected to the Keycloak server to authenticate. Be aware that the requested path defines a default tenant, which we don't have mapped in the configuration file. In this case, the default configuration is used. To authenticate to the application, enter the following credentials in the Keycloak login page: Username: alice Password: alice After clicking the Login button, you are redirected back to the application. If you try now to access the application at the following URL: http://localhost:8080/tenant-a You are redirected again to the Keycloak login page. However, this time, you are going to authenticate by using a different realm. In both cases, the landing page shows the user's name and email if the user is successfully authenticated. 
Although alice exists in both tenants, the application treats them as distinct users in separate realms. 5.10. Static tenant configuration resolution When you set multiple tenant configurations in the application.properties file, you only need to specify how the tenant identifier gets resolved. To configure the resolution of the tenant identifier, use one of the following options: Resolve with TenantResolver Default resolution Resolve with annotations These tenant resolution options are tried in the order they are listed until the tenant id gets resolved. If the tenant id remains unresolved ( null ), the default (unnamed) tenant configuration is selected. 5.10.1. Resolve with TenantResolver The following application.properties example shows how you can resolve the tenant identifier of two tenants named a and b by using the TenantResolver method: # Tenant 'a' configuration quarkus.oidc.a.auth-server-url=http://localhost:8180/realms/quarkus-a quarkus.oidc.a.client-id=client-a quarkus.oidc.a.credentials.secret=client-a-secret # Tenant 'b' configuration quarkus.oidc.b.auth-server-url=http://localhost:8180/realms/quarkus-b quarkus.oidc.b.client-id=client-b quarkus.oidc.b.credentials.secret=client-b-secret You can return the tenant id of either a or b from io.quarkus.oidc.TenantResolver : import io.quarkus.oidc.TenantResolver; public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); if (path.endsWith("a")) { return "a"; } else if (path.endsWith("b")) { return "b"; } else { // default tenant return null; } } } In this example, the value of the last request path segment is a tenant id, but if required, you can implement more complex tenant identifier resolution logic. 5.10.2. Default resolution The default resolution for a tenant identifier is convention based, whereby the authentication request must include the tenant identifier in the last segment of the request path. The following application.properties example shows how you can configure two tenants named google and github : # Tenant 'google' configuration quarkus.oidc.google.provider=google quarkus.oidc.google.client-id=USD{google-client-id} quarkus.oidc.google.credentials.secret=USD{google-client-secret} quarkus.oidc.google.authentication.redirect-path=/signed-in # Tenant 'github' configuration quarkus.oidc.github.provider=github quarkus.oidc.github.client-id=USD{github-client-id} quarkus.oidc.github.credentials.secret=USD{github-client-secret} quarkus.oidc.github.authentication.redirect-path=/signed-in In the provided example, both tenants configure OIDC web-app applications to use an authorization code flow to authenticate users and require session cookies to be generated after authentication. After Google or GitHub authenticates the current user, the user gets returned to the /signed-in area for authenticated users, such as a secured resource path on the JAX-RS endpoint. Finally, to complete the default tenant resolution, set the following configuration property: quarkus.http.auth.permission.login.paths=/google,/github quarkus.http.auth.permission.login.policy=authenticated If the endpoint is running on http://localhost:8080 , you can also provide UI options for users to log in to either http://localhost:8080/google or http://localhost:8080/github , without having to add specific /google or /github JAX-RS resource paths. Tenant identifiers are also recorded in the session cookie names after the authentication is completed.
Therefore, authenticated users can access the secured application area without requiring either the google or github path values to be included in the secured URL. Default resolution can also work for Bearer token authentication. Still, it might be less practical because a tenant identifier must always be set as the last path segment value. 5.10.3. Resolve with annotations You can use the io.quarkus.oidc.Tenant annotation for resolving the tenant identifiers as an alternative to using io.quarkus.oidc.TenantResolver . Note Proactive HTTP authentication must be disabled ( quarkus.http.auth.proactive=false ) for this to work. For more information, see the Proactive authentication guide. Assuming your application supports two OIDC tenants, the hr and default tenants, all resource methods and classes carrying @Tenant("hr") are authenticated by using the OIDC provider configured by quarkus.oidc.hr.auth-server-url . In contrast, all other classes and methods are still authenticated by using the default OIDC provider. import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import io.quarkus.oidc.Tenant; import io.quarkus.security.Authenticated; @Authenticated @Path("/api/hello") public class HelloResource { @Tenant("hr") 1 @GET @Produces(MediaType.TEXT_PLAIN) public String sayHello() { return "Hello!"; } } 1 The io.quarkus.oidc.Tenant annotation must be placed on either the resource class or resource method. 5.11. Dynamic tenant configuration resolution If you need a more dynamic configuration for the different tenants you want to support and don't want to end up with multiple entries in your configuration file, you can use the io.quarkus.oidc.TenantConfigResolver . This interface allows you to dynamically create tenant configurations at runtime: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import java.util.function.Supplier; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TenantConfigResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); String[] parts = path.split("/"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } if ("tenant-c".equals(parts[1])) { // Do 'return requestContext.runBlocking(createTenantConfig());' // if a blocking call is required to create a tenant config, return Uni.createFrom().item(createTenantConfig()); } //Resolve to default tenant configuration return null; } private Supplier<OidcTenantConfig> createTenantConfig() { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId("tenant-c"); config.setAuthServerUrl("http://localhost:8180/realms/tenant-c"); config.setClientId("multi-tenant-client"); OidcTenantConfig.Credentials credentials = new OidcTenantConfig.Credentials(); credentials.setSecret("my-secret"); config.setCredentials(credentials); // Any other setting supported by the quarkus-oidc extension return () -> config; } } The OidcTenantConfig returned by this method is the same one used to parse the oidc namespace configuration from the application.properties . You can populate it by using any settings supported by the quarkus-oidc extension.
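For comparison, the settings that the createTenantConfig() method builds programmatically map to the following static entries in application.properties. This sketch only restates the values used in the example above to show how the OidcTenantConfig setters correspond to configuration properties; the tenant-c realm, client id, and secret remain the hypothetical values from that example:
# Static equivalent of the tenant configuration created by createTenantConfig()
quarkus.oidc.tenant-c.auth-server-url=http://localhost:8180/realms/tenant-c
quarkus.oidc.tenant-c.client-id=multi-tenant-client
quarkus.oidc.tenant-c.credentials.secret=my-secret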
If the dynamic tenant resolver returns null , a Static tenant configuration resolution is attempted . 5.11.1. Tenant resolution for OIDC web-app applications The simplest option for resolving the OIDC web-app application configuration is to follow the steps described in the Default resolution section. Try one of the options below if the default resolution strategy does not work for your application setup. Several options are available for selecting the tenant configuration that should be used to secure the current HTTP request for both service and web-app OIDC applications, such as: Check the URL paths. For example, a tenant-service configuration must be used for the /service paths, while a tenant-manage configuration must be used for the /management paths. Check the HTTP headers. For example, with a URL path always being /service , a header such as Realm: service or Realm: management can help to select between the tenant-service and tenant-manage configurations. Check the URL query parameters. It can work similarly to the way the headers are used to select the tenant configuration. All these options can be easily implemented with the custom TenantResolver and TenantConfigResolver implementations for the OIDC service applications. However, due to an HTTP redirect required to complete the code authentication flow for the OIDC web-app applications, a custom HTTP cookie might be needed to select the same tenant configuration before and after this redirect request because: The URL path might not be the same after the redirect request if a single redirect URL has been registered in the OIDC provider. The original request path can be restored after the tenant configuration has been resolved. The HTTP headers used during the original request are unavailable after the redirect. The custom URL query parameters are restored after the redirect but only after the tenant configuration is resolved. One option to ensure the information for resolving the tenant configurations for web-app applications is available before and after the redirect is to use a cookie, for example: package org.acme.quickstart.oidc; import java.util.List; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.core.http.Cookie; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { List<String> tenantIdQuery = context.queryParam("tenantId"); if (!tenantIdQuery.isEmpty()) { String tenantId = tenantIdQuery.get(0); context.response().addCookie(Cookie.cookie("tenant", tenantId)); return tenantId; } else if (!context.request().cookies("tenant").isEmpty()) { return context.request().getCookie("tenant").getValue(); } return null; } } 5.12. Disabling tenant configurations Custom TenantResolver and TenantConfigResolver implementations might return null if no tenant can be inferred from the current request and a fallback to the default tenant configuration is required. If you expect the custom resolvers always to resolve a tenant, you do not need to configure the default tenant resolution. To turn off the default tenant configuration, set quarkus.oidc.tenant-enabled=false . Note The default tenant configuration is automatically disabled when quarkus.oidc.auth-server-url is not configured, but either custom tenant configurations are available or TenantConfigResolver is registered. 
Be aware that tenant-specific configurations can also be disabled, for example: quarkus.oidc.tenant-a.tenant-enabled=false . 5.13. References OIDC configuration properties Keycloak Documentation OpenID Connect JSON Web Token Google OpenID Connect Quarkus Security overview
[ "quarkus create app org.acme:security-openid-connect-multi-tenancy-quickstart --extension='oidc,resteasy-reactive-jackson' --no-code cd security-openid-connect-multi-tenancy-quickstart", "mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-multi-tenancy-quickstart -Dextensions='oidc,resteasy-reactive-jackson' -DnoCode cd security-openid-connect-multi-tenancy-quickstart", "quarkus extension add oidc", "./mvnw quarkus:add-extension -Dextensions='oidc'", "./gradlew addExtension --extensions='oidc'", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>", "implementation(\"io.quarkus:quarkus-oidc\")", "package org.acme.quickstart.oidc; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; @Path(\"/{tenant}\") public class HomeResource { /** * Injection point for the ID Token issued by the OIDC provider. */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the Access Token issued by the OIDC provider. */ @Inject JsonWebToken accessToken; /** * Returns the ID Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. * * @return ID Token info */ @GET @Produces(\"text/html\") public String getIdTokenInfo() { StringBuilder response = new StringBuilder().append(\"<html>\") .append(\"<body>\"); response.append(\"<h2>Welcome, \").append(this.idToken.getClaim(\"email\").toString()).append(\"</h2>\\n\"); response.append(\"<h3>You are accessing the application within tenant <b>\").append(idToken.getIssuer()).append(\" boundaries</b></h3>\"); return response.append(\"</body>\").append(\"</html>\").toString(); } /** * Returns the Access Token info. * This endpoint exists only for demonstration purposes. * Do not expose this token in a real application. 
* * @return Access Token info */ @GET @Produces(\"text/html\") @Path(\"bearer\") public String getAccessTokenInfo() { StringBuilder response = new StringBuilder().append(\"<html>\") .append(\"<body>\"); response.append(\"<h2>Welcome, \").append(this.accessToken.getClaim(\"email\").toString()).append(\"</h2>\\n\"); response.append(\"<h3>You are accessing the application within tenant <b>\").append(accessToken.getIssuer()).append(\" boundaries</b></h3>\"); return response.append(\"</body>\").append(\"</html>\").toString(); } }", "package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import org.eclipse.microprofile.config.ConfigProvider; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.OidcTenantConfig.ApplicationType; import io.quarkus.oidc.TenantConfigResolver; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); if (path.startsWith(\"/tenant-a\")) { String keycloakUrl = ConfigProvider.getConfig().getValue(\"keycloak.url\", String.class); OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(\"tenant-a\"); config.setAuthServerUrl(keycloakUrl + \"/realms/tenant-a\"); config.setClientId(\"multi-tenant-client\"); config.getCredentials().setSecret(\"secret\"); config.setApplicationType(ApplicationType.HYBRID); return Uni.createFrom().item(config); } else { // resolve to default tenant config return Uni.createFrom().nullItem(); } } }", "Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.application-type=web-app Tenant A configuration is created dynamically in CustomTenantConfigResolver HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated", "Default tenant configuration %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=multi-tenant-client quarkus.oidc.application-type=web-app Tenant A configuration quarkus.oidc.tenant-a.auth-server-url=http://localhost:8180/realms/tenant-a quarkus.oidc.tenant-a.client-id=multi-tenant-client quarkus.oidc.tenant-a.application-type=web-app HTTP security configuration quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated", "package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); String[] parts = path.split(\"/\"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } }", "package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = context.get(\"tenant-id\"); if (tenantId != null) { return tenantId; } else { // 
Initial login request String path = context.request().path(); String[] parts = path.split(\"/\"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } return parts[1]; } } }", "public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String tenantId = extractTenantId(context); context.put(\"tenantId\", tenantId); return tenantId; } }", "docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev", "quarkus dev", "./mvnw quarkus:dev", "./gradlew --console=plain quarkusDev", "quarkus build", "./mvnw install", "./gradlew build", "java -jar target/quarkus-app/quarkus-run.jar", "quarkus build --native", "./mvnw install -Dnative", "./gradlew build -Dquarkus.package.type=native", "./target/security-openid-connect-multi-tenancy-quickstart-runner", "Tenant 'a' configuration quarkus.oidc.a.auth-server-url=http://localhost:8180/realms/quarkus-a quarkus.oidc.a.client-id=client-a quarkus.oidc.a.credentials.secret=client-a-secret Tenant 'b' configuration quarkus.oidc.b.auth-server-url=http://localhost:8180/realms/quarkus-b quarkus.oidc.b.client-id=client-b quarkus.oidc.b.credentials.secret=client-b-secret", "import quarkus.oidc.TenantResolver; public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { String path = context.request().path(); if (path.endsWith(\"a\")) { return \"a\"; } else if (path.endsWith(\"b\")) { return \"b\"; } else { // default tenant return null; } } }", "Tenant 'google' configuration quarkus.oidc.google.provider=google quarkus.oidc.google.client-id=USD{google-client-id} quarkus.oidc.google.credentials.secret=USD{google-client-secret} quarkus.oidc.google.authentication.redirect-path=/signed-in Tenant 'github' configuration quarkus.oidc.github.provider=google quarkus.oidc.github.client-id=USD{github-client-id} quarkus.oidc.github.credentials.secret=USD{github-client-secret} quarkus.oidc.github.authentication.redirect-path=/signed-in", "quarkus.http.auth.permission.login.paths=/google,/github quarkus.http.auth.permission.login.policy=authenticated", "import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import io.quarkus.oidc.Tenant; import io.quarkus.security.Authenticated; @Authenticated @Path(\"/api/hello\") public class HelloResource { @Tenant(\"hr\") 1 @GET @Produces(MediaType.TEXT_PLAIN) public String sayHello() { return \"Hello!\"; } }", "package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import java.util.function.Supplier; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.OidcRequestContext; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TenantConfigResolver; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantConfigResolver implements TenantConfigResolver { @Override public Uni<OidcTenantConfig> resolve(RoutingContext context, OidcRequestContext<OidcTenantConfig> requestContext) { String path = context.request().path(); String[] parts = path.split(\"/\"); if (parts.length == 0) { //Resolve to default tenant configuration return null; } if (\"tenant-c\".equals(parts[1])) { // Do 'return requestContext.runBlocking(createTenantConfig());' // if a blocking call is required to create a tenant config, return Uni.createFromItem(createTenantConfig()); } //Resolve to default tenant 
configuration return null; } private Supplier<OidcTenantConfig> createTenantConfig() { final OidcTenantConfig config = new OidcTenantConfig(); config.setTenantId(\"tenant-c\"); config.setAuthServerUrl(\"http://localhost:8180/realms/tenant-c\"); config.setClientId(\"multi-tenant-client\"); OidcTenantConfig.Credentials credentials = new OidcTenantConfig.Credentials(); credentials.setSecret(\"my-secret\"); config.setCredentials(credentials); // Any other setting supported by the quarkus-oidc extension return () -> config; } }", "package org.acme.quickstart.oidc; import java.util.List; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.TenantResolver; import io.vertx.core.http.Cookie; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomTenantResolver implements TenantResolver { @Override public String resolve(RoutingContext context) { List<String> tenantIdQuery = context.queryParam(\"tenantId\"); if (!tenantIdQuery.isEmpty()) { String tenantId = tenantIdQuery.get(0); context.response().addCookie(Cookie.cookie(\"tenant\", tenantId)); return tenantId; } else if (!context.request().cookies(\"tenant\").isEmpty()) { return context.request().getCookie(\"tenant\").getValue(); } return null; } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_authentication/security-openid-connect-multitenancy
Chapter 6. Known issues
Chapter 6. Known issues This section documents known issues found in this release of Red Hat Ceph Storage. 6.1. The Cephadm utility NFS-RGW issues in Red Hat Ceph Storage post-upgrade It is recommended that customers using RGW-NFS defer their upgrade until Red Hat Ceph Storage 5.1. ( BZ#1842808 ) The ceph orch host rm command does not remove the Ceph daemons in the host of a Red Hat Ceph Storage cluster The ceph orch host rm command does not provide any output. This is expected behavior to avoid the accidental removal of Ceph daemons resulting in the loss of data. To work around this issue, the user has to remove the Ceph daemons manually. Follow the steps in the Removing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for removing the hosts of the Red Hat Ceph Storage cluster. ( BZ#1886120 ) The Ceph monitors are reported as stray daemons even after removal from the Red Hat Ceph Storage cluster Cephadm reports the Ceph monitors as stray daemons even though they have been removed from the storage cluster. To work around this issue, run the ceph mgr fail command, which allows the manager to restart and clear the error. If there is no standby manager, the ceph mgr fail command makes the cluster temporarily unresponsive. ( BZ#1945272 ) Access to the Cephadm shell is lost when monitor/s are moved to node/s without _admin label After the bootstrap, access to the Cephadm shell is lost when the monitors are moved to other nodes if there is no _admin label. To work around this issue, ensure that the destination hosts have the _admin label. ( BZ#1947497 ) Upgrade of Red Hat Ceph Storage using Cephadm gets stuck if there are no standby MDS daemons During an upgrade of a Red Hat Ceph Storage cluster with an existing MDS service and with no active standby daemons, the process gets stuck. To work around this issue, ensure that you have at least one standby MDS daemon before an upgrade through Cephadm. Run ceph fs status FILE_SYSTEM_NAME . If there are no standby daemons, add MDS daemons and then upgrade the storage cluster. The upgrade works as expected when standby daemons are present. ( BZ#1959354 ) The ceph orch ls command does not list the correct number of OSDs that can be created in the Red Hat Ceph Storage cluster The command ceph orch ls gives the following output: Example As per the above output, four OSDs have not started, which is not correct. To work around this issue, run the ceph -s command to see if all the OSDs are up and running in a Red Hat Ceph Storage cluster. ( BZ#1959508 ) The ceph orch osd rm help command gives an incorrect parameter description The ceph orch osd rm help command gives ceph orch osd rm SVC_ID ... [--replace] [--force] parameter instead of ceph orch osd rm OSD_ID ... [--replace] [--force] . This prompts the users to specify the SVC_ID while removing the OSDs. To work around this issue, use the OSD identification OSD_ID parameter to remove the OSDs of a Red Hat Ceph Storage cluster. ( BZ#1966608 ) The configuration parameter osd_memory_target_autotune can be enabled With this release, osd_memory_target_autotune is disabled by default. Users can enable OSD memory autotuning by running the following command: ( BZ#1939354 ) 6.2. Ceph Dashboard Remove the services from the host before removing the hosts from the storage cluster on the Red Hat Ceph Storage Dashboard Removing the hosts on the Red Hat Ceph Storage Dashboard before removing the services causes the hosts to be in a stale, dead, or a ghost state.
To work around this issue, manually remove all the services running on the host and then remove the host from the storage cluster using the Red Hat Ceph Storage Dashboard. If you remove the host without removing the services, you must use the command-line interface to add the host again. ( BZ#1889976 ) Users cannot create snapshots of subvolumes on the Red Hat Ceph Storage Dashboard With this release, users cannot create snapshots of the subvolumes on the Red Hat Ceph Storage Dashboard. If the user creates a snapshot of the subvolumes on the dashboard, the user gets a 500 error instead of a more descriptive error message. ( BZ#1950644 ) The Red Hat Ceph Storage Dashboard displays OSDs of only the default CRUSH root children The Red Hat Ceph Storage Dashboard considers the default CRUSH root children ignoring other CRUSH types like datacenter, zones, rack, and other types. As a result, the CRUSH map viewer on the dashboard does not display OSDs which are not part of the default CRUSH root. The tree view of OSDs of the storage cluster on the Ceph dashboard now resembles the ceph osd tree output. ( BZ#1953903 ) Users cannot log in to the Red Hat Ceph Storage Dashboard with Chrome extensions or plugins Users cannot log into the Red Hat Ceph Storage Dashboard if there are Chrome extensions for the plugins used in the browser. To work around this issue, either clear the cookies for a specific domain name in use or use the Incognito mode to access the Red Hat Ceph Storage Dashboard. ( BZ#1913580 ) The graphs on the Red Hat Ceph Storage Dashboard are not displayed The graphs on the Red Hat Ceph Storage Dashboard are not displayed because the grafana server certificate is not trusted on the client machine. To work around this issue, open the Grafana URL directly in the client internet browser and accept the security exception to see the graphs on the Ceph dashboard. ( BZ#1921092 ) Incompatible approaches to manage NFS-Ganesha exports in a Red Hat Ceph Storage cluster Currently, there are two different approaches to manage NFS-Ganesha exports in a Ceph cluster. One is using the dashboard and the other is using the command-line interface. If exports are created in one way, users might not be able to manage the exports in the other way. To work around this issue, Red Hat recommends adhering to one way of deploying and managing NFS exports, thereby avoiding the potential duplication or management of non-modifiable NFS exports. ( BZ#1939480 ) Dashboard related URL and Grafana API URL cannot be accessed with short hostnames To work around this issue, on the Red Hat Ceph Storage dashboard, in the Cluster drop-down menu, click Manager modules. Change the settings from short hostnames URL to FQDN URL. Disable the dashboard using the ceph mgr module disable dashboard command and re-enable the dashboard module using the ceph mgr module enable dashboard command. The dashboard should then be able to access the Grafana API URL and the other dashboard URLs. ( BZ#1964323 ) HA-Proxy-RGW service management is not supported on the Red Hat Ceph Storage Dashboard The Red Hat Ceph Storage Dashboard does not support HA proxy service for Ceph Object Gateway. As a workaround, HA proxy-RGW service can be managed using the Cephadm CLI. You can only view the service on the Red Hat Ceph Storage dashboard.
( BZ#1968397 ) Red Hat does not support NFS exports over the Ceph File system in the back-end on the Red Hat Ceph Storage Dashboard Red Hat does not support management of NFS exports over the Ceph File System (CephFS) on the Red Hat Ceph Storage Dashboard. Currently, NFS exports with Ceph object gateway in the back-end are supported. ( BZ#1974599 ) 6.3. Ceph File System Backtrace now works as expected for CephFS scrub operations Previously, the backtrace was not written to stable storage. Scrub activity reported a failure if the backtrace did not match the in-memory copy for a new and unsynced entry. Backtrace mismatch also happened for a stray entry that was about to be purged permanently since there was no need to save the backtrace to the disk. Because raw stats accounting is not instantaneous, the raw stats might also fail to match when there is heavy ongoing metadata I/O. To work around this issue, rerun the scrub when the system is idle and has had enough time to flush in-memory state to disk. As a result, once the metadata has been flushed to the disk, these errors are resolved. Backtrace validation is successful if there is no backtrace found on the disk and the file is new, and the entry is stray and about to be purged. See the KCS Ceph status shows HEALTH_ERR with MDSs report damaged metadata for more details. ( BZ#1794781 ) NFS mounts are now accessible with multiple exports Previously, when multiple CephFS exports were created, read/write to the exports would hang. As a result, the NFS mounts were inaccessible. To work around this issue, single exports are supported for Ganesha version 3.3-2 and below. With this release, multiple CephFS exports are supported when Ganesha version 3.3-3 and above is used. ( BZ#1909949 ) The cephfs-top utility displays wrong mounts and missing metrics The cephfs-top utility expects a newer kernel than what is currently shipped with Red Hat Enterprise Linux 8. The complete set of performance statistics patches is required by the cephfs-top utility. Currently, there is no workaround for this known issue. ( BZ#1946516 ) 6.4. Ceph Object Gateway The LC policy for a versioned bucket fails in between reshards Currently, the LC policy fails to work, after suspending and enabling versioning on a versioned bucket, with reshards in between. ( BZ#1962575 ) The radosgw-admin user stats command displays incorrect values for the size_utilized and size_kb_utilized fields When a user runs the radosgw-admin user stats command after adding buckets to the Red Hat Ceph Storage cluster, the output displays incorrect values in the size_utilized and size_kb_utilized fields; they are always displayed as zero. There is no workaround for this issue and users can ignore these values. ( BZ#1986160 ) 6.5. Multi-site Ceph Object Gateway Deleting a large number of objects from the primary site by using a bucket lifecycle (LC) policy does not delete the corresponding number of objects from the secondary site. ( BZ#1976874 ) The object size recorded in the bucket index is not reliably updated when an object is overwritten, which leads to ambiguity between the statistics reported on the primary and secondary sites. ( BZ#1986826 ) 6.6.
The Ceph Ansible utility RBD mirroring does not work as expected after the upgrade from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5 The cephadm-adopt playbook does not bring up rbd-mirroring after the migration of the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5. To work around this issue, add the peers manually: Syntax Example For more information, see the Adding a storage cluster peer section in the Red Hat Ceph Storage Block Device Guide. ( BZ#1967440 ) The cephadm-adopt.yml playbook currently fails when the dashboard is enabled on the Grafana node Currently, the cephadm-adopt.yml playbook fails to run as it does not create the /etc/ceph directory on nodes deployed only with a Ceph monitor. To work around this issue, manually create the /etc/ceph directory on the Ceph monitor node before running the playbook. Verify that the directory is owned by the ceph user's UID and GID. ( BZ#2029697 ) 6.7. Known issues with Documentation Documentation for users to manage Ceph File system snapshots on the Red Hat Ceph Storage Dashboard Details for this feature will be included in an upcoming version of the Red Hat Ceph Storage Dashboard Guide . Documentation for users to manage hosts on the Red Hat Ceph Storage Dashboard Details for this feature will be included in an upcoming version of the Red Hat Ceph Storage Dashboard Guide . Documentation for users to import RBD images instantaneously Details for the rbd import command will be included in an upcoming version of the Red Hat Ceph Storage Block Device Guide .
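As a rough sketch of the manual workaround for the cephadm-adopt.yml failure described in the Ceph Ansible utility section above, you can pre-create the directory on each affected Ceph Monitor node before running the playbook. The UID and GID value of 167 is an assumption based on the default ceph user in containerized deployments; verify the actual values on your system before applying it:
# Create the directory the playbook expects and assign it to the ceph user
mkdir -p /etc/ceph
chown 167:167 /etc/ceph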
[ "ceph orch ls osd.all-available-devices 12/16 4m ago 4h *", "ceph config set osd osd_memory_target_autotune true", "rbd mirror pool peer add POOL_NAME CLIENT_NAME @ CLUSTER_NAME", "rbd --cluster site-a mirror pool peer add image-pool client.rbd-mirror-peer@site-b" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/release_notes/known-issues
Chapter 7. Installing a cluster on Azure with network customizations
Chapter 7. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.13, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 7.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.5.
Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 7.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . 
An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. 
Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 7.5.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 7.4. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. 
This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. 
platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. 
If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 7.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.5.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 7.1. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 7.5.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 7.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Family StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 7.5.5.
Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 1 10 14 16 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.5.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. 
Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 7.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 7.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.6. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.7. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 7.8. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 7.9. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 7.10. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.11. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . 
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 7.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. 
For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 7.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 
18", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/installing-azure-network-customizations
Chapter 2. Migrating autoscaling from earlier RHOSP deployments
Chapter 2. Migrating autoscaling from earlier RHOSP deployments There are important differences between the autoscaling heat templates on Red Hat OpenStack Services on OpenShift (RHOSO) and earlier versions of Red Hat OpenStack Platform (RHOSP): Updated alarm types and parameters. Groups of instances. Alarm queries. 2.1. Updating alarm types and obsolete parameters Earlier versions of Red Hat OpenStack Platform (RHOSP) used Gnocchi for metric storage. Because Red Hat OpenStack Services on OpenShift (RHOSO) uses Prometheus for metric storage instead, you need to edit the alarm types in your existing template files. Procedure Update the value of the resources.cpu_alarm_high.type parameter from OS::Aodh::GnocchiAggregationByResourcesAlarm to OS::Aodh::PrometheusAlarm . Update the value of the resources.cpu_alarm_low.type parameter from OS::Aodh::GnocchiAggregationByResourcesAlarm to OS::Aodh::PrometheusAlarm . Delete the following parameters from the alarm definitions section of the template: metric aggregation_method granularity evaluation_periods resource_type 2.2. Querying the alarms In Red Hat OpenStack Services on OpenShift (RHOSO), ensure that the queries in the alarm definitions are compatible with PromQL. Procedure Include server_group=<stack_id> in the queries to select only the instances in the stack. The ceilometer_cpu metric is stored as an accumulated value that records the total CPU time since the instance was created. You can use the PromQL rate() function to query for a specified CPU time, or for a percentage value of CPU time, over a certain duration. In earlier Red Hat OpenStack Platform (RHOSP) versions, this was not automatic, but now you can compute the percentage automatically as part of the query. You can use the PromQL rate() function like this: rate(ceilometer_cpu{server_group=<stack_id>}[<duration>]) . To avoid unstable query results, calculate the <duration> by adding the <ceilometer polling interval> to the <prometheus scrape interval>. Optional: You can use the USD openstack metric query command to check what the new queries return. Ensure that you test your queries. Check that only instances from the relevant stack are queried, take note of the metric values returned by the queries, and set the thresholds in the alarm definitions accordingly. autoscaling.yaml 47,49c50,53 < list_join: < - '' < - - {'=': {server_group: {get_param: "OS::stack_id"}}} --- > str_replace: > template: "(rate(ceilometer_cpu{server_group='stack_id'}[150s]))/10000000" > params: > stack_id: {get_param: OS::stack_id} 68,70c67,70 < list_join: < - '' < - - {'=': {server_group: {get_param: "OS::stack_id"}}} --- > str_replace: > template: "(rate(ceilometer_cpu{server_group='stack_id'}[150s]))/10000000" > params: > stack_id: {get_param: OS::stack_id}
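To make the diff above easier to read in context, the following is a minimal sketch of how a complete cpu_alarm_high resource might look after the migration. The description, threshold, and comparison_operator properties are assumed to carry over unchanged from your existing template, and the 150s window is only an example value (for instance, a 120-second Ceilometer polling interval plus a 30-second Prometheus scrape interval):
cpu_alarm_high:
  type: OS::Aodh::PrometheusAlarm
  properties:
    description: Scale up if CPU usage is above the threshold
    threshold: 80
    comparison_operator: gt
    query:
      str_replace:
        template: "(rate(ceilometer_cpu{server_group='stack_id'}[150s]))/10000000"
        params:
          stack_id: {get_param: OS::stack_id}
    # alarm_actions and any other properties from your existing template remain unchanged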
[ "47,49c50,53 < list_join: < - '' < - - {'=': {server_group: {get_param: \"OS::stack_id\"}}} --- > str_replace: > template: \"(rate(ceilometer_cpu{server_group='stack_id'}[150s]))/10000000\" > params: > stack_id: {get_param: OS::stack_id} 68,70c67,70 < list_join: < - '' < - - {'=': {server_group: {get_param: \"OS::stack_id\"}}} --- > str_replace: > template: \"(rate(ceilometer_cpu{server_group='stack_id'}[150s]))/10000000\" > params: > stack_id: {get_param: OS::stack_id}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/autoscaling_for_instances/migrating-autoscaling-from-earlier-rhosp-deployments_osp
Chapter 3. Customizing GitLab pipelines and rebuilding container images
Chapter 3. Customizing GitLab pipelines and rebuilding container images When customizing pipelines in GitLab, rebuilding the container image is a critical step to ensure your changes are included in the updated workflow. Pipelines in GitLab rely on a specific container image, such as the GitLab runner image, to execute the tasks defined in your CI/CD workflow. The GitLab runner image provides the necessary tools, configuration, and scripts required for your pipeline to run. If you modify tasks in the pipeline, you must update and rebuild the container image to include those changes. This ensures that your pipeline tasks are properly executed when the pipeline is triggered. Rebuilding the container image involves the following steps: Running podman to extract the existing pipeline files. Customizing the pipeline tasks to meet your requirements. Rebuilding the container image. Pushing the updated image to a container registry. Prerequisites Before making changes, ensure that: You have podman installed on your system. You have login credentials for Quay.io to push the updated image. You have forked and cloned the Sample templates repository. Your forked repository is up to date and synced with the upstream repository. Procedure Create a directory for the build resources. The location of the directory depends on your role: Platform engineer: If you are rebuilding the image to update the default pipeline for your organization, you may create the directory within the location where you forked the tssc-sample-templates repository. For example, rebuild-image . Developers: If you are rebuilding the image for your own use or a specific project, create a directory in a location outside the organization-wide fork of tssc-sample-templates to avoid conflicts. mkdir rebuild-image Run the following command and copy the container image URL: more ../tssc-sample-templates/skeleton/ci/source-repo/gitlabci/.gitlab-ci.yml # Example: ... image: quay.io/redhat-appstudio/rhtap-task-runner:latest ... Copy the existing pipeline content locally by running the following command: podman run -v USD(pwd):/pwd:z <The_url_you_copied_in_step_2> cp -r /work/rhtap /pwd 1 1 This command starts the container using the <the_url_you_copied_in_step_2> image. It then copies the pipeline files from /work/rhtap inside the container to your working directory (USD(pwd)). Additionally, if you are not on a system with SELinux enabled (SELinux is enabled by default on Fedora, RHEL, and CentOS, for example), remove the :z option from the command to ensure compatibility. Navigate to the rhtap directory and customize the pipeline tasks as required. Create a Dockerfile in the rebuild-image directory and add the following content to include your changes in the container. Additionally, the base image quay.io/redhat-appstudio/rhtap-task-runner:latest is built on ubi/ubi-minimal , which provides a lightweight Universal Base Image (UBI). If you need to install additional software and dependencies, use microdnf . An example: FROM quay.io/redhat-appstudio/rhtap-task-runner:latest 1 #Copy the updated pipeline tasks COPY ./rhtap /work/rhtap 2 # Example: Install additional software (for example, git) RUN microdnf -y install make 3 1 Uses the existing rhtap-runner container as the starting point. 2 Copies the updated pipeline tasks from the local rhtap folder into the /work/rhtap directory inside the container. 3 Demonstrates the use of microdnf to install additional software or dependencies.
Build the new container image with the following command: podman build -f Dockerfile -t quay.io/<namespace>/<new_image_name> 1 1 -f Dockerfile specifies the Dockerfile to use to build the container image. The -f flag allows you to explicitly point to the Dockerfile if it is not named Dockerfile or is located in a different directory. -t quay.io/<namespace>/<new_image_name> assigns a tag (name) to the container image for easy identification. Log in to Quay.io and push the updated image: podman login quay.io # Enter username and password podman push quay.io/<namespace>/<new_image_name> 1 1 This uploads the customized container image to Quay.io, making it available for use in GitLab pipelines. Platform engineer only: Navigate to the tssc-sample-templates > skeleton > ci > source-repo > gitlabci > .gitlab-ci.yml file and replace the container image URL with the one you just created. This makes the new container image the default. image: quay.io/<namespace>/<new_image_name> For developers only: To update the container image just for a single repository in GitLab: Navigate to the source repository > .gitlab-ci.yml file and replace the container image URL with the one you just created. image: quay.io/<namespace>/<new_image_name> Commit and push the changes to apply the updated pipeline configuration.
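Before you point a pipeline at the new image, it can be worth confirming that the pushed image really contains your modified tasks. The following optional check is a minimal sketch and not part of the official procedure; it pulls the image you pushed and lists the pipeline files that were copied into it (the --entrypoint flag is used so the listing runs regardless of the image's default entrypoint):
podman pull quay.io/<namespace>/<new_image_name>
podman run --rm --entrypoint ls quay.io/<namespace>/<new_image_name> -R /work/rhtap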
[ "mkdir rebuild-image", "more ../tssc-sample-templates/skeleton/ci/source-repo/gitlabci/.gitlab-ci.yml Example: image: quay.io/redhat-appstudio/rhtap-task-runner:latest", "run -v USD(pwd):/pwd:z <The_url_you_copied_in_step_2> cp -r /work/rhtap /pwd 1", "FROM quay.io/redhat-appstudio/rhtap-task-runner:latest 1 #Copy the updated pipeline tasks COPY ./rhtap /work/rhtap 2 Example: Install additional software (for example, git) RUN microdnf -y install make 3", "build -f Dockerfile -t quay.io/<namespace>/<new_image_name> 1", "login quay.io Enter username and password push quay.io/<namespace>/<new_image_name> 1", "image: quay.io/<namespace>/<new_image_name>", "image: quay.io/<namespace>/<new_image_name>" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/customizing_red_hat_trusted_application_pipeline/customizing-gitlab-pipelines-and-rebuilding-container-images_default
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/deploying_your_red_hat_build_of_quarkus_applications_to_openshift_container_platform/making-open-source-more-inclusive
Chapter 1. New features and enhancements
Chapter 1. New features and enhancements Red Hat JBoss Core Services (JBCS) 2.4.57 Service Pack 3 includes the following new features and enhancements. 1.1. ResponseStatusCodeOnNoContext directive for mod_proxy_cluster From the JBCS 2.4.57 Service Pack 3 release onward, the mod_proxy_cluster module also includes a ResponseStatusCodeOnNoContext directive. You can use the ResponseStatusCodeOnNoContext directive to specify the response status code that the server sends to the client when a ProxyPass or ProxyPassMatch directive does not have a matching context. The default status code is 404 . Note In JBCS 2.4.51 or earlier, when a ProxyPass or ProxyPassMatch directive did not have a matching context, the server sent a 503 status code by default. If you want to preserve the default behavior that was available in earlier releases, set the ResponseStatusCodeOnNoContext directive to 503 instead. If you specify a value other than a standard HTTP response code, the server access log shows the specified value but the server sends a 500 Internal Server Error response to the client.
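For example, to keep the behavior of JBCS 2.4.51 and earlier, you can set the directive alongside your existing mod_proxy_cluster settings. The following httpd configuration fragment is a minimal illustrative sketch and assumes that the mod_proxy_cluster module is already loaded and configured:
# Return 503 instead of the default 404 when no context matches a ProxyPass or ProxyPassMatch directive
ResponseStatusCodeOnNoContext 503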
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_3_release_notes/new_features_and_enhancements
Chapter 4. Handling a machine configuration for hosted control planes
Chapter 4. Handling a machine configuration for hosted control planes In a standalone OpenShift Container Platform cluster, a machine config pool manages a set of nodes. You can handle a machine configuration by using the MachineConfigPool custom resource (CR). Tip You can reference any machineconfiguration.openshift.io resources in the nodepool.spec.config field of the NodePool CR. In hosted control planes, the MachineConfigPool CR does not exist. A node pool contains a set of compute nodes. You can handle a machine configuration by using node pools. 4.1. Configuring node pools for hosted control planes On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster. Procedure To create a MachineConfig object inside of a config map in the management cluster, enter the following information: apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:... mode: 420 overwrite: true path: USD{PATH} 1 1 Sets the path on the node where the MachineConfig object is stored. After you add the object to the config map, you can apply the config map to the node pool as follows: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 4.2. Referencing the kubelet configuration in node pools To reference your kubelet configuration in node pools, you add the kubelet configuration in a config map and then apply the config map in the NodePool resource. Procedure Add the kubelet configuration inside of a config map in the management cluster by entering the following information: Example ConfigMap object with the kubelet configuration apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> 1 namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: <kubeletconfig_name> 2 spec: kubeletConfig: registerWithTaints: - key: "example.sh/unregistered" value: "true" effect: "NoExecute" 1 Replace <configmap_name> with the name of your config map. 2 Replace <kubeletconfig_name> with the name of the KubeletConfig resource. Apply the config map to the node pool by entering the following command: USD oc edit nodepool <nodepool_name> --namespace clusters 1 1 Replace <nodepool_name> with the name of your node pool. Example NodePool resource configuration apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 4.3. Configuring node tuning in a hosted cluster To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools. Procedure Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. 
In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml : apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio="55" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile Note If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, node labels will not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace . Create the ConfigMap object in the management cluster: USD oc --kubeconfig="USDMGMT_KUBECONFIG" create -f tuned-1.yaml Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool , named nodepool-1 , which contains 2 nodes. apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: ... name: nodepool-1 namespace: clusters ... spec: ... tuningConfig: - name: tuned-1 status: ... Note You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster. Verification Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool , the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node. List the Tuned objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME AGE default 7m36s rendered 7m36s tuned-1 65s List the Profile objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s Note If no custom profiles are created, the openshift-node profile is applied by default. To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values: USD oc --kubeconfig="USDHC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio Example output vm.dirty_ratio = 55 4.4. Deploying the SR-IOV Operator for hosted control planes Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane. Prerequisites You must configure and deploy the hosted cluster on AWS. For more information, see Configuring the hosting cluster on AWS (Technology Preview) . Procedure Create a namespace and an Operator group: apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator Create a subscription to the SR-IOV Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: "" source: redhat-operators sourceNamespace: openshift-marketplace Verification To verify that the SR-IOV Operator is ready, run the following command and view the resulting output: USD oc get csv -n openshift-sriov-network-operator Example output NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.16.0-202211021237 SR-IOV Network Operator 4.16.0-202211021237 sriov-network-operator.4.16.0-202210290517 Succeeded To verify that the SR-IOV pods are deployed, run the following command: USD oc get pods -n openshift-sriov-network-operator
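After you reference a config map from a node pool, a quick way to confirm that the reference was recorded is to read the relevant fields back from the NodePool resource. The following commands are a minimal sketch that reuses the nodepool-1 name and the clusters namespace from the examples above:
oc get nodepool nodepool-1 -n clusters -o jsonpath='{.spec.config[*].name}'
oc get nodepool nodepool-1 -n clusters -o jsonpath='{.spec.tuningConfig[*].name}'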
[ "apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1", "oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> 1 namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: <kubeletconfig_name> 2 spec: kubeletConfig: registerWithTaints: - key: \"example.sh/unregistered\" value: \"true\" effect: \"NoExecute\"", "oc edit nodepool <nodepool_name> --namespace clusters 1", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile", "oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 7m36s rendered 7m36s tuned-1 65s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio", "vm.dirty_ratio = 55", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: \"\" source: s/qe-app-registry/redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-sriov-network-operator", "NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.16.0-202211021237 SR-IOV Network Operator 4.16.0-202211021237 sriov-network-operator.4.16.0-202210290517 Succeeded", "oc get pods -n openshift-sriov-network-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hosted_control_planes/handling-a-machine-configuration-for-hosted-control-planes
Chapter 1. Metadata APIs
Chapter 1. Metadata APIs 1.1. APIRequestCount [apiserver.openshift.io/v1] Description APIRequestCount tracks requests made to an API. The instance name must be of the form resource.version.group , matching the resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object 1.3. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 1.4. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 1.5. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object 1.6. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.7. Event [v1] Description Event is a report of an event somewhere in the cluster. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.8. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 1.9. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object
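As a brief illustration of the first of these APIs, you can list APIRequestCount resources to see which API endpoints have received requests; the resource name follows the resource.version.group form described above. This is a minimal sketch, and deployments.v1.apps is only an example instance name:
oc get apirequestcounts
oc get apirequestcount deployments.v1.apps -o yaml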
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/metadata_apis/metadata-apis
Chapter 6. Installing a cluster on IBM Cloud with network customizations
Chapter 6. Installing a cluster on IBM Cloud with network customizations In OpenShift Container Platform version 4.18, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on IBM Cloud(R). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 6.6.1. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 6.6.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 gx3d-160x1792x8h100 mx2-8x64 mx2d-4x32 mx3d-4x40 ox2-8x64 ux2d-2x56 vx2d-4x56 Additional resources Optimizing storage 6.6.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: 9 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 11 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 12 fips: false 13 sshKey: ssh-ed25519 AAAA... 14 1 8 11 12 Required. The installation program prompts you for this value. 2 5 9 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 10 The cluster network plugin to install. 
The default value OVNKubernetes is the only supported value. 13 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 14 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.8. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork nodeNetworking For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 6.9. 
Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 6.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. 
For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.3. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 6.5. 
ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 6.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 6.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . 
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 6.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 6.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 6.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
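As a supplement to the certificate recovery note in the cluster deployment section, the following sketch shows how you might review and approve pending certificate signing requests (CSRs) after a restart, for example to recover kubelet certificates. These are standard oc administration commands rather than steps taken from this installation procedure, so treat them as an illustrative aid. To list CSRs and identify any that are pending, run:
oc get csr
To approve a single pending CSR, replacing <csr_name> with a name from the previous output, run:
oc adm certificate approve <csr_name>
To approve all pending CSRs in one pass, a command such as the following can be used:
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve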
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: 9 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 11 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 12 fips: false 13 sshKey: ssh-ed25519 AAAA... 14", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - 
cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_cloud/installing-ibm-cloud-network-customizations
Chapter 6. Fixed security issues
Chapter 6. Fixed security issues This section lists security issues fixed in Red Hat Developer Hub 1.4. 6.1. Red Hat Developer Hub 1.4.2 6.1.1. Red Hat Developer Hub dependency updates CVE-2025-22150 A flaw was found in the undici package for Node.js. Undici uses Math.random() to choose the boundary for a multipart/form-data request. It is known that the output of Math.random() can be predicted if several of its generated values are known. If an app has a mechanism that sends multipart requests to an attacker-controlled website, it can leak the necessary values. Therefore, an attacker can tamper with the requests going to the backend APIs if certain conditions are met. 6.2. Red Hat Developer Hub 1.4.1 6.2.1. Red Hat Developer Hub dependency updates CVE-2024-45338 A flaw was found in golang.org/x/net/html. This flaw allows an attacker to craft input to the parse functions that would be processed non-linearly with respect to its length, resulting in extremely slow parsing. This issue can cause a denial of service. CVE-2024-52798 A flaw was found in path-to-regexp. A path-to-regexp turns path strings into regular expressions. In certain cases, path-to-regexp will output a regular expression that can be exploited to cause poor performance. CVE-2024-55565 nanoid (aka Nano ID) before 5.0.9 mishandles non-integer values. 3.3.8 is also a fixed version. CVE-2024-56201 A flaw was found in the Jinja2 package. A bug in the Jinja compiler allows an attacker that controls both the content and filename of a template to execute arbitrary Python code, regardless of Jinja's sandbox being used. An attacker needs to be able to control both the filename and the contents of a template. Whether that is the case depends on the type of application using Jinja. This vulnerability impacts users of applications that execute untrusted templates where the template author can also choose the template filename. CVE-2024-56326 A flaw was found in the Jinja package. In affected versions of Jinja, an oversight in how the Jinja sandboxed environment detects calls to str.format allows an attacker that controls the content of a template to execute arbitrary Python code. To exploit the vulnerability, an attacker needs to control the content of a template. Whether that is the case depends on the type of application using Jinja. This vulnerability impacts users of applications that execute untrusted templates. Jinja's sandbox does catch calls to str.format and ensures they don't escape the sandbox. However, storing a reference to a malicious string's format method is possible, then passing that to a filter that calls it. No such filters are built into Jinja but could be present through custom filters in an application. After the fix, such indirect calls are also handled by the sandbox. CVE-2024-56334 A flaw was found in the systeminformation library for Node.js. In Windows systems, the SSID parameter of the getWindowsIEEE8021x function is not sanitized before it is passed to cmd.exe. This may allow a remote attacker to execute arbitrary commands on the target system. 6.3. Red Hat Developer Hub 1.4.0 6.3.1. Red Hat Developer Hub dependency updates CVE-2024-21536 A flaw was found in the http-proxy-middleware package. Affected versions of this package are vulnerable to denial of service (DoS) due to an UnhandledPromiseRejection error thrown by micromatch. This flaw allows an attacker to kill the Node.js process and crash the server by requesting certain paths. 
CVE-2024-21538 A Regular Expression Denial of Service (ReDoS) vulnerability was found in the cross-spawn package for Node.js. Due to improper input sanitization, an attacker can increase CPU usage and crash the program with a large, specially crafted string. CVE-2024-45296 A flaw was found in path-to-regexp package, where it turns path strings into regular expressions. In certain cases, path-to-regexp will output a regular expression that can be exploited to cause poor performance. Because JavaScript is single-threaded and regex matching runs on the main thread, poor performance will block the event loop and lead to a denial of service (DoS). CVE-2024-45590 A flaw was found in body-parser. This vulnerability causes denial of service via a specially crafted payload when the URL encoding is enabled. CVE-2024-45815 A flaw was found in the backstage/plugin-catalog-backend package. A malicious actor with authenticated access to a Backstage instance with the catalog backend plugin installed is able to interrupt the service using a specially crafted query to the catalog API. CVE-2024-45816 A directory traversal vulnerability was found in the backstage/plugin-techdocs-backend package. When using the AWS S3 or GCS storage provider for TechDocs, it is possible to access content in the entire storage bucket. This can leak contents of the bucket that are not intended to be accessible, as well as bypass permission checks in Backstage. CVE-2024-46976 A flaw was found in the backstage/plugin-techdocs-backend package. An attacker with control of the contents of the TechDocs storage buckets may be able to inject executable scripts in the TechDocs content that will be executed in the victim's browser when browsing documentation or navigating to an attacker provided link. CVE-2024-47762 A flaw was found in the backstage/plugin-app-backend package. Configurations supplied through APP_CONFIG_* environment variables unexpectedly ignore the visibility defined in the configuration schema, potentially exposing sensitive configuration details intended to remain private or restricted to backend processes.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/release_notes/fixed-security-issues
25.8. Debugging Rsyslog
25.8. Debugging Rsyslog To run rsyslogd in debugging mode, use the following command: rsyslogd -dn With this command, rsyslogd produces debugging information and prints it to the standard output. The -n stands for "no fork". You can modify debugging with environment variables; for example, you can store the debug output in a log file. Before starting rsyslogd , type the following on the command line: Replace path with the desired location of the file where the debugging information will be logged. For a complete list of options available for the RSYSLOG_DEBUG variable, see the related section in the rsyslogd(8) manual page. To check whether the syntax used in the /etc/rsyslog.conf file is valid, use: rsyslogd -N 1 Here, 1 represents the level of verbosity of the output message. This is a forward compatibility option because currently only one level is provided. However, you must add this argument to run the validation.
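As an illustration of combining these options, the following sketch captures the debug output in a file and validates an alternative configuration file before it is put into place. The file paths and the use of the -f option to point rsyslogd at a different configuration file are assumptions made for this example; adjust them to your environment.
export RSYSLOG_DEBUGLOG="/var/log/rsyslog-debug.log"
export RSYSLOG_DEBUG="Debug"
rsyslogd -dn
rsyslogd -f /etc/rsyslog.d/new-rules.conf -N 1
If the validation reports no problems, move the new rules into /etc/rsyslog.conf or /etc/rsyslog.d/ and restart the rsyslog service as usual.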
[ "export RSYSLOG_DEBUGLOG=\" path \" export RSYSLOG_DEBUG=\"Debug\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-debugging_rsyslog
6.12. Setting Up New Key Sets
6.12. Setting Up New Key Sets This section describes setting up an alternative to the default key set in the Token Processing System (TPS) and in the Token Key Service (TKS). TKS configuration The default key set is configured in the TKS using the following options in the /var/lib/pki/ instance_name /tks/conf/CS.cfg file: The above configuration defines settings specific to a certain type or class of tokens that can be used in the TMS. The most important part is the three developer, or out-of-the-box, session keys, which are used to create a secure channel before symmetric key handover takes place. A different type of key may have different default values for these keys. The settings describing the nistSP800 key diversification method control whether this method or the standard Visa method is used. Specifically, the value of the tks.defKeySet.nistSP800-108KdfOnKeyVersion option determines that the NIST version will be used. The nistSP800-108KdfUseCuidAsKdd option allows you to use the legacy key ID value of CUID during processing. The newer KDD value is most commonly used, and therefore this option is disabled ( false ) by default. This allows you to configure a new key set to enable support for a new class of keys. Example 6.2. Enabling Support for the jForte Class To enable support for the jForte class, set: Note the difference in the three static session keys compared to the defKeySet example above. Certificate System supports the Secure Channel Protocol 03 (SCP03) for Giesecke & Devrient (G&D) Smart Cafe 6 smart cards. To enable SCP03 support for these smart cards in a TKS, set the following in the /var/lib/pki/ instance_name /tks/conf/CS.cfg file: TPS configuration The TPS must be configured to recognize the new key set when a supported client attempts to perform an operation on a token. The default defKeySet is used most often. The primary method to determine the keySet in the TPS involves Section 6.7, "Mapping Resolver Configuration" . See the linked section for a discussion of the exact settings needed to establish this resolver mechanism. If the KeySet Mapping Resolver is not present, several fallback methods are available for the TPS to determine the correct keySet : You can add the tps.connector.tks1.keySet=defKeySet setting to the CS.cfg configuration file of the TPS. Certain clients might be configured to explicitly pass the desired keySet value. However, the Enterprise Security Client does not currently have this ability. When the TPS calculates the proper keySet based on the desired method, all requests to the TKS to help create secure channels pass the keySet value as well. The TKS can then use its own keySet configuration (described above) to determine how to proceed.
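To make the configuration pattern concrete, the following sketch shows what a hypothetical new key set named myKeySet might look like in the TKS, modeled on the defKeySet and jForte entries above, together with the TPS fallback setting that points at it. The name myKeySet, the reuse of the developer session key values, and the use of the fallback option with a non-default key set name are assumptions made for illustration; in most deployments the keySet is selected through the mapping resolver described in Section 6.7, and the key values must match the token class in use.
In the /var/lib/pki/ instance_name /tks/conf/CS.cfg file:
tks.myKeySet.mk_mappings.#02#01= <tokenname> : <nickname>
tks.myKeySet.auth_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f
tks.myKeySet.kek_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f
tks.myKeySet.mac_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f
tks.myKeySet.nistSP800-108KdfOnKeyVersion=00
tks.myKeySet.nistSP800-108KdfUseCuidAsKdd=false
In the /var/lib/pki/ instance_name /tps/conf/CS.cfg file:
tps.connector.tks1.keySet=myKeySet
Restart both subsystem instances after editing the CS.cfg files so that the new settings take effect.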
[ "tks.defKeySet._000=## tks.defKeySet._001=## Axalto default key set: tks.defKeySet._002=## tks.defKeySet._003=## tks.defKeySet.mk_mappings.#02#01= <tokenname> : <nickname> tks.defKeySet._004=## tks.defKeySet.auth_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f tks.defKeySet.kek_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f tks.defKeySet.mac_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f tks.defKeySet.nistSP800-108KdfOnKeyVersion=00 tks.defKeySet.nistSP800-108KdfUseCuidAsKdd=false", "tks.jForte._000=## tks.jForte._001=## SAFLink's jForte default key set: tks.jForte._002=## tks.jForte._003=## tks.jForte.mk_mappings.#02#01= <tokenname> : <nickname> tks.jForte._004=## tks.jForte.auth_key=#30#31#32#33#34#35#36#37#38#39#3a#3b#3c#3d#3e#3f tks.jForte.kek_key=#50#51#52#53#54#55#56#57#58#59#5a#5b#5c#5d#5e#5f tks.jForte.mac_key=#40#41#42#43#44#45#46#47#48#49#4a#4b#4c#4d#4e#4f tks.jForte.nistSP800-108KdfOnKeyVersion=00 tks.jForte.nistSP800-108KdfUseCuidAsKdd=false", "tks.defKeySet.prot3.divers=emv tks.defKeySet.prot3.diversVer1Keys=emv tks.defKeySet.prot3.devKeyType=DES3 tks.defKeySet.prot3.masterKeyType=DES3" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/sect-new-key-sets
Preface
Preface As an OpenShift cluster administrator, you can manage the following Red Hat OpenShift AI resources: Users and groups The dashboard interface, including the visibility of navigation menu options Applications that show in the dashboard Custom deployment resources that are related to the Red Hat OpenShift AI Operator, for example, CPU and memory limits and requests Accelerators Distributed workloads Data backup
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_openshift_ai/pr01
Chapter 8. Understanding OpenShift Container Platform development
Chapter 8. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 8.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 8.2. Building a simple container You have an idea for an application and you want to containerize it. First, you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 8.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile .
In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 8.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 8.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI. In the Developer perspective, navigate to the +Add view and in the Developer Catalog tile, view all of the available services in the Developer Catalog. Figure 8.2. Choose S2I base images for apps that need specific runtimes 8.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 8.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 8.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more containers. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 8.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider whether the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might then not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so it can handle things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 8.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.15 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 8.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 8.3.5. steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML. 
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 8.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator.
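To ground the Day 1 workflow described above, the following is a minimal sketch of the kind of YAML manifest you might write and apply with the oc apply command. It defines a Deployment with two replicas and a Service in front of it. The application name hello-app, the container port 8080, and the service port 80 are assumptions made for this example; the image name reuses the placeholder from the container-building section earlier in this chapter.
# Deployment that runs two replicas of the example image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/myrepo/myapp:latest
        ports:
        - containerPort: 8080
---
# Service that load balances traffic across the Deployment's pods
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
Saving this as hello-app.yaml and running oc apply -f hello-app.yaml creates or updates both objects; committing the same file to a Git repository is the Day 2 step in the workflow above.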
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/architecture/understanding-development
Chapter 16. Understanding and managing pod security admission
Chapter 16. Understanding and managing pod security admission Pod security admission is an implementation of the Kubernetes pod security standards . Use pod security admission to restrict the behavior of pods. 16.1. About pod security admission OpenShift Container Platform includes Kubernetes pod security admission . Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run. Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits. You can also configure the pod security admission settings at the namespace level. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 16.1.1. Pod security admission modes You can configure the following pod security admission modes for a namespace: Table 16.1. Pod security admission modes Mode Label Description enforce pod-security.kubernetes.io/enforce Rejects a pod from admission if it does not comply with the set profile audit pod-security.kubernetes.io/audit Logs audit events if a pod does not comply with the set profile warn pod-security.kubernetes.io/warn Displays warnings if a pod does not comply with the set profile 16.1.2. Pod security admission profiles You can set each of the pod security admission modes to one of the following profiles: Table 16.2. Pod security admission profiles Profile Description privileged Least restrictive policy; allows for known privilege escalation baseline Minimally restrictive policy; prevents known privilege escalations restricted Most restrictive policy; follows current pod hardening best practices 16.1.3. Privileged namespaces The following system namespaces are always set to the privileged pod security admission profile: default kube-public kube-system You cannot change the pod security profile for these privileged namespaces. 16.1.4. Pod security admission and security context constraints Pod security admission standards and security context constraints are reconciled and enforced by two independent controllers. The two controllers work independently using the following processes to enforce security policies: The security context constraint controller may mutate some security context fields per the pod's assigned SCC. For example, if the seccomp profile is empty or not set and if the pod's assigned SCC enforces seccompProfiles field to be runtime/default , the controller sets the default type to RuntimeDefault . The security context constraint controller validates the pod's security context against the matching SCC. The pod security admission controller validates the pod's security context against the pod security standard assigned to the namespace. 16.2. About pod security admission synchronization In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace. 
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created. Namespace labeling is based on consideration of namespace-local service account privileges. Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling. 16.2.1. Pod security admission synchronization namespace exclusions Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-* prefixed namespaces, but you can enable synchronization on them later. Important If a pod security admission label ( pod-security.kubernetes.io/<mode> ) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label. If necessary, you can enable synchronization again by using one of the following methods: By removing the modified pod security admission label from the namespace By setting the security.openshift.io/scc.podSecurityLabelSync label to true If you force synchronization by adding this label, then any modified pod security admission labels will be overwritten. Permanently disabled namespaces Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled: default kube-node-lease kube-system kube-public openshift All system-created namespaces that are prefixed with openshift- , except for openshift-operators Initially disabled namespaces By default, all namespaces that have an openshift- prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-* namespaces and for the openshift-operators namespace. Note You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators . If an Operator is installed in a user-created openshift-* namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace. 16.3. Controlling pod security admission synchronization You can enable or disable automatic pod security admission synchronization for most namespaces. Important You cannot enable pod security admission synchronization on some system-created namespaces. For more information, see Pod security admission synchronization namespace exclusions . Procedure For each namespace that you want to configure, set a value for the security.openshift.io/scc.podSecurityLabelSync label: To disable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to false . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true . 
Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true Note Use the --overwrite flag to overwrite the value if this label is already set on the namespace. Additional resources Pod security admission synchronization namespace exclusions 16.4. Configuring pod security admission for a namespace You can configure the pod security admission settings at the namespace level. For each of the pod security admission modes on the namespace, you can set which pod security admission profile to use. Procedure For each pod security admission mode that you want to set on a namespace, run the following command: USD oc label namespace <namespace> \ 1 pod-security.kubernetes.io/<mode>=<profile> \ 2 --overwrite 1 Set <namespace> to the namespace to configure. 2 Set <mode> to enforce , warn , or audit . Set <profile> to restricted , baseline , or privileged . 16.5. About pod security admission alerts A PodSecurityViolation alert is triggered when the Kubernetes API server reports that there is a pod denial on the audit level of the pod security admission controller. This alert persists for one day. View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a workload is likely to fail admission if global enforcement is set to the restricted pod security level. For assistance in identifying pod security admission violation audit events, see Audit annotations in the Kubernetes documentation. 16.5.1. Identifying pod security violations The PodSecurityViolation alert does not provide details on which workloads are causing pod security violations. You can identify the affected workloads by reviewing the Kubernetes API server audit logs. This procedure uses the must-gather tool to gather the audit logs and then searches for the pod-security.kubernetes.io/audit-violations annotation. Prerequisites You have installed jq . You have access to the cluster as a user with the cluster-admin role. Procedure To gather the audit logs, enter the following command: USD oc adm must-gather -- /usr/bin/gather_audit_logs To output the affected workload details, enter the following command: USD zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz \ | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name' \ | sort | uniq -c Replace <archive_id> and <image_digest_id> with the actual path names. Example output 1 test-namespace my-pod 16.6. Additional resources Viewing audit logs Managing security context constraints
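As a consolidated illustration of the namespace-level controls described in this chapter, the following sketch shows a Namespace manifest that sets the enforce mode to the restricted profile, sets the warn and audit modes to the baseline profile, and disables automatic label synchronization. The namespace name my-app and the particular profile choices are assumptions made for this example; applying this manifest has the same effect as running the oc label commands shown above.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    # Pod security admission modes and profiles for this namespace
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/audit: baseline
    # Opt out of automatic label synchronization so these values are not overwritten
    security.openshift.io/scc.podSecurityLabelSync: "false"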
[ "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true", "oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite", "oc adm must-gather -- /usr/bin/gather_audit_logs", "zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name' | sort | uniq -c", "1 test-namespace my-pod" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/understanding-and-managing-pod-security-admission
Chapter 6. Network connections
Chapter 6. Network connections 6.1. Creating outgoing connections To connect to a remote server, pass connection options containing the host and port to the container.connect() method. Example: Creating outgoing connections container.on("connection_open", function (event) { console.log("Connection " + event.connection + " is open"); }); var opts = { host: "example.com", port: 5672 }; container.connect(opts); The default host is localhost . The default port is 5672. For information about creating secure connections, see Chapter 7, Security . 6.2. Configuring reconnect Reconnect allows a client to recover from lost connections. It is used to ensure that the components in a distributed system reestablish communication after temporary network or component failures. AMQ JavaScript enables reconnect by default. If a connection attempt fails, the client will try again after a brief delay. The delay increases exponentially for each new attempt, up to a default maximum of 60 seconds. To disable reconnect, set the reconnect connection option to false . Example: Disabling reconnect var opts = { host: "example.com", reconnect: false }; container.connect(opts); To control the delays between connection attempts, set the initial_reconnect_delay and max_reconnect_delay connection options. Delay options are specified in milliseconds. To limit the number of reconnect attempts, set the reconnect_limit option. Example: Configuring reconnect var opts = { host: "example.com", initial_reconnect_delay: 100 , max_reconnect_delay: 60 * 1000 , reconnect_limit: 10 }; container.connect(opts); 6.3. Configuring failover AMQ JavaScript allows you to configure alternate connection endpoints programmatically. To specify multiple connection endpoints, define a function that returns new connection options and pass the function in the connection_details option. The function is called once for each connection attempt. Example: Configuring failover var hosts = ["alpha.example.com", "beta.example.com"]; var index = -1; function failover_fn() { index += 1; if (index == hosts.length) index = 0; return {host: hosts[index]}; }; var opts = { host: "example.com", connection_details: failover_fn }; container.connect(opts); This example implements repeating round-robin failover for a list of hosts. You can use this interface to implement your own failover behavior. 6.4. Accepting incoming connections AMQ JavaScript can accept inbound network connections, enabling you to build custom messaging servers. To start listening for connections, use the container.listen() method with options containing the local host address and port to listen on. Example: Accepting incoming connections container.on("connection_open", function (event) { console.log("New incoming connection " + event.connection ); }); var opts = { host: "0.0.0.0", port: 5672 }; container.listen(opts); The special IP address 0.0.0.0 listens on all available IPv4 interfaces. To listen on all IPv6 interfaces, use [::0] . For more information, see the server receive.js example .
[ "container.on(\"connection_open\", function (event) { console.log(\"Connection \" + event.connection + \" is open\"); }); var opts = { host: \"example.com\", port: 5672 }; container.connect(opts);", "var opts = { host: \"example.com\", reconnect: false }; container.connect(opts);", "var opts = { host: \"example.com\", initial_reconnect_delay: 100 , max_reconnect_delay: 60 * 1000 , reconnect_limit: 10 }; container.connect(opts);", "var hosts = [\"alpha.example.com\", \"beta.example.com\"]; var index = -1; function failover_fn() { index += 1; if (index == hosts.length) index = 0; return {host: hosts[index].hostname}; }; var opts = { host: \"example.com\", connection_details: failover_fn } container.connect(opts);", "container.on(\"connection_open\", function (event) { console.log(\"New incoming connection \" + event.connection ); }); var opts = { host: \"0.0.0.0\", port: 5672 }; container.listen(opts);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_javascript_client/network_connections
4.249. python-rhsm
4.249. python-rhsm 4.249.1. RHBA-2011:1696 - python-rhsm bug fix update An updated python-rhsm package that fixes multiple bugs is now available for Red Hat Enterprise Linux 6. The python-rhsm package contains a small library for communicating with the representational state transfer (REST) interface of a Red Hat Unified Entitlement Platform. This interface is used for the management of system entitlements, certificates, and access to content. Bug Fixes BZ# 700601 Prior to this update, the firstboot utility attempted to set an erroneous environment variable, LANG=us. As a result, firstboot was unable to start properly. With this update, the C locale is now used as the fallback locale so that the problem no longer occurs. BZ# 719378 If a user name containing white space was submitted during the registration process in the Subscription Manager, an incorrect error message was displayed. This problem has been fixed in this update so that the correct error message "Invalid credentials" is now displayed. BZ# 728266 If a subscription was selected in the My Subscriptions tab in the Subscription Manager, and then the Unsubscribe button was pressed, an error occurred. With this update, the problem has been fixed so that the unsubscribe function in the Subscription Manager now works as expected. BZ# 736166 The /etc/rhsm/ca/candlepin-stage.pem, /etc/rhsm/ca/fakamai-cp1.pem, and /etc/rhsm/ca/redhat-uep.pem certificates have been moved to the updated python-rhsm package so that it is now possible for python-rhsm to register with the hosted Candlepin system. BZ# 746241 If the rhsmcertd daemon received information about an existing update, but no products were installed, an exception occurred. Also, if the virt-who agent attempted to update the guest systems of a host, but there were no guest systems available, an exception occurred. With this update, these exceptions no longer occur. All users of python-rhsm are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-rhsm
Chapter 91. ZipArtifact schema reference
Chapter 91. ZipArtifact schema reference Used in: Plugin
Properties:
url (string) - URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type.
sha512sum (string) - SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type.
insecure (boolean) - By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure.
type (string) - Must be zip .
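For orientation only, the following abbreviated sketch shows where a zip artifact might appear in a KafkaConnect build configuration. It is not taken from this reference; the resource name, plugin name, output image, URL, and checksum are placeholders:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect                                          # placeholder name
spec:
  # ... other KafkaConnect configuration ...
  build:
    output:
      type: docker
      image: registry.example.com/my-org/my-connect:latest  # placeholder output image
    plugins:
      - name: my-connector                                  # placeholder plugin name
        artifacts:
          - type: zip
            url: https://example.com/releases/my-connector.zip   # placeholder URL
            sha512sum: a1b2c3d4e5...                        # placeholder checksum

Because sha512sum is optional but recommended, including it means the downloaded archive is verified during the automated build, as described above.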
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-zipartifact-reference
Chapter 1. Authorization services overview
Chapter 1. Authorization services overview Red Hat Single Sign-On supports fine-grained authorization policies and is able to combine different access control mechanisms such as: Attribute-based access control (ABAC) Role-based access control (RBAC) User-based access control (UBAC) Context-based access control (CBAC) Rule-based access control Using JavaScript Time-based access control Support for custom access control mechanisms (ACMs) through a Service Provider Interface (SPI) Red Hat Single Sign-On is based on a set of administrative UIs and a RESTful API, and provides the necessary means to create permissions for your protected resources and scopes, associate those permissions with authorization policies, and enforce authorization decisions in your applications and services. Resource servers (applications or services serving protected resources) usually rely on some kind of information to decide if access should be granted to a protected resource. For RESTful-based resource servers, that information is usually obtained from a security token, usually sent as a bearer token on every request to the server. For web applications that rely on a session to authenticate users, that information is usually stored in a user's session and retrieved from there for each request. Frequently, resource servers only perform authorization decisions based on role-based access control (RBAC), where the roles granted to the user trying to access protected resources are checked against the roles mapped to these same resources. While roles are very useful and used by applications, they also have a few limitations: Resources and roles are tightly coupled and changes to roles (such as adding, removing, or changing an access context) can impact multiple resources Changes to your security requirements can imply deep changes to application code to reflect these changes Depending on your application size, role management might become difficult and error-prone It is not the most flexible access control mechanism. Roles do not represent who you are and lack contextual information. If you have been granted a role, you have at least some access. Considering that today we need to consider heterogeneous environments where users are distributed across different regions, with different local policies, using different devices, and with a high demand for information sharing, Red Hat Single Sign-On Authorization Services can help you improve the authorization capabilities of your applications and services by providing: Resource protection using fine-grained authorization policies and different access control mechanisms Centralized Resource, Permission, and Policy Management Centralized Policy Decision Point REST security based on a set of REST-based authorization services Authorization workflows and User-Managed Access The infrastructure to help avoid code replication across projects (and redeploys) and quickly adapt to changes in your security requirements. 1.1. Architecture From a design perspective, Authorization Services is based on a well-defined set of authorization patterns providing these capabilities: Policy Administration Point (PAP) Provides a set of UIs based on the Red Hat Single Sign-On Administration Console to manage resource servers, resources, scopes, permissions, and policies. Part of this is also accomplished remotely through the use of the Protection API . 
Policy Decision Point (PDP) Provides a distributable policy decision point to where authorization requests are sent and policies are evaluated according to the permissions being requested. For more information, see Obtaining Permissions . Policy Enforcement Point (PEP) Provides implementations for different environments to actually enforce authorization decisions at the resource server side. Red Hat Single Sign-On provides some built-in Policy Enforcers . Policy Information Point (PIP) Because it is based on the Red Hat Single Sign-On Authentication Server, you can obtain attributes from identities and the runtime environment during the evaluation of authorization policies. 1.1.1. The authorization process Three main processes define the necessary steps to understand how to use Red Hat Single Sign-On to enable fine-grained authorization for your applications: Resource Management Permission and Policy Management Policy Enforcement 1.1.1.1. Resource management Resource Management involves all the necessary steps to define what is being protected. First, you need to tell Red Hat Single Sign-On what you are looking to protect, which usually represents a web application or a set of one or more services. For more information on resource servers see Terminology . Resource servers are managed using the Red Hat Single Sign-On Administration Console. There you can enable any registered client application as a resource server and start managing the resources and scopes you want to protect. A resource can be a web page, a RESTFul resource, a file in your file system, an EJB, and so on. They can represent a group of resources (just like a Class in Java) or they can represent a single and specific resource. For instance, you might have a Bank Account resource that represents all banking accounts and use it to define the authorization policies that are common to all banking accounts. However, you might want to define specific policies for Alice Account (a resource instance that belongs to a customer), where only the owner is allowed to access some information or perform an operation. Resources can be managed using the Red Hat Single Sign-On Administration Console or the Protection API . In the latter case, resource servers are able to manage their resources remotely. Scopes usually represent the actions that can be performed on a resource, but they are not limited to that. You can also use scopes to represent one or more attributes within a resource. 1.1.1.2. Permission and policy management Once you have defined your resource server and all the resources you want to protect, you must set up permissions and policies. This process involves all the necessary steps to actually define the security and access requirements that govern your resources. Policies define the conditions that must be satisfied to access or perform operations on something (resource or scope), but they are not tied to what they are protecting. They are generic and can be reused to build permissions or even more complex policies. For instance, to allow access to a group of resources only for users granted the "User Premium" role, you can use RBAC (Role-based Access Control). Red Hat Single Sign-On provides a few built-in policy types (and their respective policy providers) covering the most common access control mechanisms. You can even create policies based on rules written using JavaScript. Once you have your policies defined, you can start defining your permissions. Permissions are coupled with the resource they are protecting.
Here you specify what you want to protect (resource or scope) and the policies that must be satisfied to grant or deny permission. 1.1.1.3. Policy enforcement Policy Enforcement involves the necessary steps to actually enforce authorization decisions to a resource server. This is achieved by enabling a Policy Enforcement Point or PEP at the resource server that is capable of communicating with the authorization server, ask for authorization data and control access to protected resources based on the decisions and permissions returned by the server. Red Hat Single Sign-On provides some built-in Policy Enforcers implementations that you can use to protect your applications depending on the platform they are running on. 1.1.2. Authorization services Authorization services consist of the following RESTFul endpoints: Token Endpoint Resource Management Endpoint Permission Management Endpoint Each of these services provides a specific API covering the different steps involved in the authorization process. 1.1.2.1. Token endpoint OAuth2 clients (such as front end applications) can obtain access tokens from the server using the token endpoint and use these same tokens to access resources protected by a resource server (such as back end services). In the same way, Red Hat Single Sign-On Authorization Services provide extensions to OAuth2 to allow access tokens to be issued based on the processing of all policies associated with the resource(s) or scope(s) being requested. This means that resource servers can enforce access to their protected resources based on the permissions granted by the server and held by an access token. In Red Hat Single Sign-On Authorization Services the access token with permissions is called a Requesting Party Token or RPT for short. Additional resources Obtaining Permissions 1.1.2.2. Protection API The Protection API is a set of UMA-compliant endpoint-providing operations for resource servers to help them manage their resources, scopes, permissions, and policies associated with them. Only resource servers are allowed to access this API, which also requires a uma_protection scope. The operations provided by the Protection API can be organized in two main groups: Resource Management Create Resource Delete Resource Find by Id Query Permission Management Issue Permission Tickets Note By default, Remote Resource Management is enabled. You can change that using the Red Hat Single Sign-On Administration Console and only allow resource management through the console. When using the UMA protocol, the issuance of Permission Tickets by the Protection API is an important part of the whole authorization process. As described in a subsequent section, they represent the permissions being requested by the client and that are sent to the server to obtain a final token with all permissions granted during the evaluation of the permissions and policies associated with the resources and scopes being requested. Additional resources Protection API 1.2. Terminology Before going further, it is important to understand these terms and concepts introduced by Red Hat Single Sign-On Authorization Services. 1.2.1. Resource Server Per OAuth2 terminology, a resource server is the server hosting the protected resources and capable of accepting and responding to protected resource requests. Resource servers usually rely on some kind of information to decide whether access to a protected resource should be granted. 
For RESTful-based resource servers, that information is usually carried in a security token, typically sent as a bearer token along with every request to the server. Web applications that rely on a session to authenticate users usually store that information in the user's session and retrieve it from there for each request. In Red Hat Single Sign-On, any confidential client application can act as a resource server. This client's resources and their respective scopes are protected and governed by a set of authorization policies. 1.2.2. Resource A resource is part of the assets of an application and the organization. It can be a set of one or more endpoints, a classic web resource such as an HTML page, and so on. In authorization policy terminology, a resource is the object being protected. Every resource has a unique identifier that can represent a single resource or a set of resources. For instance, you can manage a Banking Account Resource that represents and defines a set of authorization policies for all banking accounts. But you can also have a different resource named Alice's Banking Account , which represents a single resource owned by a single customer, which can have its own set of authorization policies. 1.2.3. Scope A resource's scope is a bounded extent of access that is possible to perform on a resource. In authorization policy terminology, a scope is one of the potentially many verbs that can logically apply to a resource. It usually indicates what can be done with a given resource. Example of scopes are view, edit, delete, and so on. However, scope can also be related to specific information provided by a resource. In this case, you can have a project resource and a cost scope, where the cost scope is used to define specific policies and permissions for users to access a project's cost. 1.2.4. Permission Consider this simple and very common permission: A permission associates the object being protected with the policies that must be evaluated to determine whether access is granted. X CAN DO Y ON RESOURCE Z where ... X represents one or more users, roles, or groups, or a combination of them. You can also use claims and context here. Y represents an action to be performed, for example, write, view, and so on. Z represents a protected resource, for example, "/accounts". Red Hat Single Sign-On provides a rich platform for building a range of permission strategies ranging from simple to very complex, rule-based dynamic permissions. It provides flexibility and helps to: Reduce code refactoring and permission management costs Support a more flexible security model, helping you to easily adapt to changes in your security requirements Make changes at runtime; applications are only concerned about the resources and scopes being protected and not how they are protected. 1.2.5. Policy A policy defines the conditions that must be satisfied to grant access to an object. Unlike permissions, you do not specify the object being protected but rather the conditions that must be satisfied for access to a given object (for example, resource, scope, or both). Policies are strongly related to the different access control mechanisms (ACMs) that you can use to protect your resources. With policies, you can implement strategies for attribute-based access control (ABAC), role-based access control (RBAC), context-based access control, or any combination of these. 
Red Hat Single Sign-On leverages the concept of policies and how you define them by providing the concept of aggregated policies, where you can build a "policy of policies" and still control the behavior of the evaluation. Instead of writing one large policy with all the conditions that must be satisfied for access to a given resource, the policies implementation in Red Hat Single Sign-On Authorization Services follows the divide-and-conquer technique. That is, you can create individual policies, then reuse them with different permissions and build more complex policies by combining individual policies. 1.2.6. Policy provider Policy providers are implementations of specific policy types. Red Hat Single Sign-On provides built-in policies, backed by their corresponding policy providers, and you can create your own policy types to support your specific requirements. Red Hat Single Sign-On provides a SPI (Service Provider Interface) that you can use to plug in your own policy provider implementations. 1.2.7. Permission ticket A permission ticket is a special type of token defined by the User-Managed Access (UMA) specification that provides an opaque structure whose form is determined by the authorization server. This structure represents the resources and/or scopes being requested by a client, the access context, as well as the policies that must be applied to a request for authorization data (requesting party token [RPT]). In UMA, permission tickets are crucial to support person-to-person sharing and also person-to-organization sharing. Using permission tickets for authorization workflows enables a range of scenarios from simple to complex, where resource owners and resource servers have complete control over their resources based on fine-grained policies that govern the access to these resources. In the UMA workflow, permission tickets are issued by the authorization server to a resource server, which returns the permission ticket to the client trying to access a protected resource. Once the client receives the ticket, it can make a request for an RPT (a final token holding authorization data) by sending the ticket back to the authorization server. For more information on permission tickets, see User-Managed Access and the UMA specification.
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/authorization_services_guide/overview
Chapter 11. Updating hardware on nodes running on vSphere
Chapter 11. Updating hardware on nodes running on vSphere You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 13 or later is supported for vSphere virtual machines in a cluster. You can update your virtual hardware immediately or schedule an update in vCenter. Important Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. This version is still fully supported, but support will be removed in a future version of OpenShift Container Platform. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. 11.1. Updating virtual hardware on vSphere To update the hardware of your virtual machines (VMs) on VMware vSphere, update your virtual machines separately to reduce the risk of downtime for your cluster. 11.1.1. Updating the virtual hardware for control plane nodes on vSphere To reduce the risk of downtime, it is recommended that control plane nodes be updated serially. This ensures that the Kubernetes API remains available and etcd retains quorum. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 6.7U3 or later. Procedure List the control plane nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.22.1 control-plane-node-1 Ready master 75m v1.22.1 control-plane-node-2 Ready master 75m v1.22.1 Note the names of your control plane nodes. Mark the control plane node as unschedulable. USD oc adm cordon <control_plane_node> Shut down the virtual machine (VM) associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Power on the VM associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<control_plane_node> Mark the control plane node as schedulable again: USD oc adm uncordon <control_plane_node> Repeat this procedure for each control plane node in your cluster. 11.1.2. Updating the virtual hardware for compute nodes on vSphere To reduce the risk of downtime, it is recommended that compute nodes be updated serially. Note Multiple compute nodes can be updated in parallel given workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 6.7U3 or later. Procedure List the compute nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/worker Example output NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.22.1 compute-node-1 Ready worker 30m v1.22.1 compute-node-2 Ready worker 30m v1.22.1 Note the names of your compute nodes. 
Mark the compute node as unschedulable: USD oc adm cordon <compute_node> Evacuate the pods from the compute node. There are several ways to do this. For example, you can evacuate all or selected pods on a node: USD oc adm drain <compute_node> [--pod-selector=<pod_selector>] See the "Understanding how to evacuate pods on nodes" section for other options to evacuate pods from a node. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Power on the VM associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<compute_node> Mark the compute node as schedulable again: USD oc adm uncordon <compute_node> Repeat this procedure for each compute node in your cluster. 11.1.3. Updating the virtual hardware for template on vSphere Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 6.7U3 or later. Procedure If the RHCOS template is configured as a vSphere template follow Convert a Template to a Virtual Machine in the VMware documentation prior to the step. Note Once converted from a template, do not power on the virtual machine. Update the VM in the vSphere client. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Convert the VM in the vSphere client from a VM to template. Follow Convert a Virtual Machine to a Template in the vSphere Client in the VMware documentation for more information. Additional resources Understanding how to evacuate pods on nodes 11.2. Scheduling an update for virtual hardware on vSphere Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following Schedule a Compatibility Upgrade for a Virtual Machine in the VMware documentation. When scheduling an upgrade prior to performing an upgrade of OpenShift Container Platform, the virtual hardware update occurs when the nodes are rebooted during the course of the OpenShift Container Platform upgrade.
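Although not part of the documented procedure, workloads that must remain available while compute nodes are drained one at a time, as in the procedure above, are often protected with a PodDisruptionBudget. The following is a minimal sketch with a hypothetical application label and namespace:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb              # hypothetical name
  namespace: my-app             # hypothetical namespace
spec:
  minAvailable: 2               # keep at least two replicas running during a node drain
  selector:
    matchLabels:
      app: my-app               # hypothetical workload label

Because oc adm drain uses the eviction API, it respects a budget like this one and waits instead of evicting pods below the configured minimum.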
[ "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.22.1 control-plane-node-1 Ready master 75m v1.22.1 control-plane-node-2 Ready master 75m v1.22.1", "oc adm cordon <control_plane_node>", "oc wait --for=condition=Ready node/<control_plane_node>", "oc adm uncordon <control_plane_node>", "oc get nodes -l node-role.kubernetes.io/worker", "NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.22.1 compute-node-1 Ready worker 30m v1.22.1 compute-node-2 Ready worker 30m v1.22.1", "oc adm cordon <compute_node>", "oc adm drain <compute_node> [--pod-selector=<pod_selector>]", "oc wait --for=condition=Ready node/<compute_node>", "oc adm uncordon <compute_node>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/updating_clusters/updating-hardware-on-nodes-running-on-vsphere
13.8. Query Plans
13.8. Query Plans 13.8.1. Query Plans When integrating information using a federated query planner, it is useful to be able to view the query plans that are created, to better understand how information is being accessed and processed, and to troubleshoot problems. A query plan is a set of instructions created by a query engine for executing a command submitted by a user or application. The purpose of the query plan is to execute the user's query in as efficient a way as possible. 13.8.2. Getting a Query Plan You can get a query plan any time you execute a command. The SQL options available are as follows: SET SHOWPLAN [ON|DEBUG] - Returns the processing plan or the plan and the full planner debug log. With the above options, the query plan is available from the Statement object by casting to the org.teiid.jdbc.TeiidStatement interface or by using the "SHOW PLAN" statement. Example 13.13. Retrieving a Query Plan statement.execute("set showplan on"); ResultSet rs = statement.executeQuery("select ..."); TeiidStatement tstatement = statement.unwrap(TeiidStatement.class); PlanNode queryPlan = tstatement.getPlanDescription(); System.out.println(queryPlan); The query plan is made available automatically in several JBoss Data Virtualization tools. 13.8.3. Analyzing a Query Plan Once a query plan has been obtained, you will most commonly be looking for: Source pushdown - what parts of the query were pushed to each source? Ensure that any predicates, especially those against indexes, are pushed. Join ordering - as federated joins can be quite expensive. They are typically influenced by costing. Join algorithm used - merge, enhanced merge, nested loop and so forth. Presence of federated optimizations, such as dependent joins. Join criteria type mismatches. All of the issues presented above will be present in subsections of the plan that are specific to relational queries. If you are executing a procedure or generating an XML document, the overall query plan will contain additional information related to the surrounding procedural execution. A query plan consists of a set of nodes organized in a tree structure. As with the above example, you will typically be interested in analyzing the textual form of the plan. In a procedural context the ordering of child nodes implies the order of execution. In most other situations, child nodes may be executed in any order, even in parallel. Only in specific optimizations, such as dependent join, will the children of a join execute serially. 13.8.4. Relational Plans Relational plans represent the actual processing plan that is composed of nodes that are the basic building blocks of logical relational operations. Physical relational plans differ from logical relational plans in that they will contain additional operations and execution specifics that were chosen by the optimizer. The nodes for a relational query plan are: Access Access a source. A source query is sent to the connection factory associated with the source. [For a dependent join, this node is called Dependent Access.] Dependent Procedure Access Access a stored procedure on a source using multiple sets of input values. Batched Update Processes a set of updates as a batch. Project Defines the columns returned from the node. This does not alter the number of records returned. Project Into Like a normal project, but outputs rows into a target table. Select Select is a criteria evaluation filter node (WHERE / HAVING).
When there is a subquery in the criteria, this node is called Dependent Select. Insert Plan Execution Similar to a project into, but executes a plan rather than a source query. Typically created when executing an insert into view with a query expression. Window Function Project Like a normal project, but includes window functions. Join Defines the join type, join criteria, and join strategy (merge or nested loop). Union All There are no properties for this node; it just passes rows through from its children. Depending upon other factors, such as if there is a transaction or the source query concurrency allowed, not all of the union children will execute in parallel. Sort Defines the columns to sort on, the sort direction for each column, and whether to remove duplicates or not. Dup Remove Removes duplicate rows. The processing uses a tree structure to detect duplicates so that results will effectively stream at the cost of IO operations. Grouping Groups sets of rows into groups and evaluates aggregate functions. Null A node that produces no rows. Usually replaces a Select node where the criteria is always false (and whatever tree is underneath). There are no properties for this node. Plan Execution Executes another sub plan. Typically the sub plan will be a non-relational plan. Dependent Procedure Execution Executes a sub plan using multiple sets of input values. Limit Returns a specified number of rows, then stops processing. Also processes an offset if present. XML Table Evaluates XMLTABLE. The debug plan will contain more information about the XQuery/XPath with regards to their optimization - see the XQuery section below or XQuery Optimization. Text Table Evaluates TEXTTABLE Array Table Evaluates ARRAYTABLE Object Table Evaluates OBJECTTABLE 13.8.5. Relational Plans: Node Statistics Every node has a set of statistics that are output. These can be used to determine the amount of data flowing through the node. Before execution a processor plan will not contain node statistics. Also the statistics are updated as the plan is processed, so typically you will want the final statistics after all rows have been processed by the client. Table 13.1. Node Statistics Statistic Description Units Node Output Rows Number of records output from the node count Node Batch Process Time Time processing in this node only millisec Node Cumulative Process Time Elapsed time from beginning of processing to end millisec Node Cumulative Batch Process Time Time processing in this node + child nodes millisec Node Batch Calls Number of times a node was called for processing count Node Blocks Number of times a blocked exception was thrown by this node or a child count In addition to node statistics, some nodes display cost estimates computed at the node. Table 13.2. Node Cost Estimates Cost Estimates Description Units Estimated Node Cardinality Estimated number of records that will be output from the node; -1 if unknown count The root node will display additional information. 13.8.6. Source Hints Table 13.3. Registry Properties Top level Statistics Description Units Data Bytes Sent The size of the serialized data result (row and lob values) sent to the client bytes The query processor plan can be obtained in a plain text or xml format. The plan text format is typically easier to read, while the xml format is easier to process by tooling. When possible tooling should be used to examine the plans as the tree structures can be deeply nested. Data flows from the leafs of the tree to the root. 
Sub plans for procedure execution can be shown inline, and are differentiated by different indentation. Given a user query of "SELECT pm1.g1.e1, pm1.g2.e2, pm1.g3.e3 from pm1.g1 inner join (pm1.g2 left outer join pm1.g3 on pm1.g2.e1=pm1.g3.e1) on pm1.g1.e1=pm1.g3.e1" the text for a processor plan that does not push down the joins would look like: Note that the nested join node is using a merge join and expects the source queries from each side to produce the expected ordering for the join. The parent join is an enhanced sort join which can delay the decision to perform sorting based upon the incoming rows. Note that the outer join from the user query has been modified to an inner join since none of the null inner values can be present in the query result. The same plan in xml form looks like this: Note that the same information appears in each of the plan forms. In some cases it can actually be easier to follow the simplified format of the debug plan final processor plan. From the Debug Log the same plan as above would appear as: These are the node properties: Common Output Columns - what columns make up the tuples returned by this node Data Bytes Sent - how many data byte, not including messaging overhead, were sent by this query Planning Time - the amount of time in milliseconds spent planning the query Relational Relational Node ID - matches the node ids seen in the debug log Node(id) Criteria - the boolean expression used for filtering Select Columns - the columns that define the projection Grouping Columns - the columns used for grouping Query - the source query Model Name - the model name Sharing ID - nodes sharing the same source results will have the same sharing id Dependent Join - if a dependent join is being used Join Strategy - the join strategy (Nested Loop, Sort Merge, Enhanced Sort, etc.) Join Type - the join type (Left Outer Join, Inner Join, Cross Join) Join Criteria - the join predicates Execution Plan - the nested execution plan Into Target - the insertion target Sort Columns - the columns for sorting Sort Mode - if the sort performs another function as well, such as distinct removal Rollup - if the group by has the rollup option Statistics - the processing statistics Cost Estimates - the cost/cardinality estimates including dependent join cost estimates Row Offset - the row offset expression Row Limit - the row limit expression With - the with clause Window Functions - the window functions being computed Table Function - the table function (XMLTABLE, OBJECTTABLE, TEXTTABLE, etc.) XML Message Tag Namespace Data Column Namespace Declarations Optional Flag Default Value Recursion Direction Bindings Is Staging Flag Source In Memory Flag Condition Default Program Encoding Formatted Flag Procedure Expression Result Set Program Variable Then Else XML document model queries and procedure execution (including instead of triggers) use intermediate and final plan forms that include relational plans. Generally the structure of the xml/procedure plans will closely match their logical forms. It is the nested relational plans that will be of interest when analyzing performance issues. 13.8.7. Statistics Gathering and Single Partitions The statistics-gathering feature in the Red Hat JBoss Data Virtualization engine does not take partition statistics into account. For most queries, using the global statistics will not provide accurate results for a single partition. Currently, there is a manual approach that will require modeling each partition as a table. 
Here is an example: The statistics can be updated using Teiid Designer, by setting the cardinality on the table or, alternatively you can use the System procedure:
[ "statement.execute(\"set showplan on\"); ResultSet rs = statement.executeQuery(\"select ...\"); TeiidStatement tstatement = statement.unwrap(TeiidStatement.class); PlanNode queryPlan = tstatement.getPlanDescription(); System.out.println(queryPlan);", "ProjectNode + Output Columns: 0: e1 (string) 1: e2 (integer) 2: e3 (boolean) + Cost Estimates:Estimated Node Cardinality: -1.0 + Child 0: JoinNode + Output Columns: 0: e1 (string) 1: e2 (integer) 2: e3 (boolean) + Cost Estimates:Estimated Node Cardinality: -1.0 + Child 0: JoinNode + Output Columns: 0: e1 (string) 1: e1 (string) 2: e3 (boolean) + Cost Estimates:Estimated Node Cardinality: -1.0 + Child 0: AccessNode + Output Columns:e1 (string) + Cost Estimates:Estimated Node Cardinality: -1.0 + Query:SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0 + Model Name:pm1 + Child 1: AccessNode + Output Columns: 0: e1 (string) 1: e3 (boolean) + Cost Estimates:Estimated Node Cardinality: -1.0 + Query:SELECT g_0.e1 AS c_0, g_0.e3 AS c_1 FROM pm1.g3 AS g_0 ORDER BY c_0 + Model Name:pm1 + Join Strategy:MERGE JOIN (ALREADY_SORTED/ALREADY_SORTED) + Join Type:INNER JOIN + Join Criteria:pm1.g1.e1=pm1.g3.e1 + Child 1: AccessNode + Output Columns: 0: e1 (string) 1: e2 (integer) + Cost Estimates:Estimated Node Cardinality: -1.0 + Query:SELECT g_0.e1 AS c_0, g_0.e2 AS c_1 FROM pm1.g2 AS g_0 ORDER BY c_0 + Model Name:pm1 + Join Strategy:ENHANCED SORT JOIN (SORT/ALREADY_SORTED) + Join Type:INNER JOIN + Join Criteria:pm1.g3.e1=pm1.g2.e1 + Select Columns: 0: pm1.g1.e1 1: pm1.g2.e2 2: pm1.g3.e3", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <node name=\"ProjectNode\"> <property name=\"Output Columns\"> <value>e1 (string)</value> <value>e2 (integer)</value> <value>e3 (boolean)</value> </property> <property name=\"Cost Estimates\"> <value>Estimated Node Cardinality: -1.0</value> </property> <property name=\"Child 0\"> <node name=\"JoinNode\"> <property name=\"Output Columns\"> <value>e1 (string)</value> <value>e2 (integer)</value> <value>e3 (boolean)</value> </property> <property name=\"Cost Estimates\"> <value>Estimated Node Cardinality: -1.0</value> </property> <property name=\"Child 0\"> <node name=\"JoinNode\"> <property name=\"Output Columns\"> <value>e1 (string)</value> <value>e1 (string)</value> <value>e3 (boolean)</value> </property> <property name=\"Cost Estimates\"> <value>Estimated Node Cardinality: -1.0</value> </property> <property name=\"Child 0\"> <node name=\"AccessNode\"> <property name=\"Output Columns\"> <value>e1 (string)</value> </property> <property name=\"Cost Estimates\"> <value>Estimated Node Cardinality: -1.0</value> </property> <property name=\"Query\"> <value>SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0</value> </property> <property name=\"Model Name\"> <value>pm1</value> </property> </node> </property> <property name=\"Child 1\"> <node name=\"AccessNode\"> <property name=\"Output Columns\"> <value>e1 (string)</value> <value>e3 (boolean)</value> </property> <property name=\"Cost Estimates\"> <value>Estimated Node Cardinality: -1.0</value> </property> <property name=\"Query\"> <value>SELECT g_0.e1 AS c_0, g_0.e3 AS c_1 FROM pm1.g3 AS g_0 ORDER BY c_0</value> </property> <property name=\"Model Name\"> <value>pm1</value> </property> </node> </property> <property name=\"Join Strategy\"> <value>MERGE JOIN (ALREADY_SORTED/ALREADY_SORTED)</value> </property> <property name=\"Join Type\"> <value>INNER JOIN</value> </property> <property name=\"Join Criteria\"> <value>pm1.g1.e1=pm1.g3.e1</value> </property> </node> </property> <property 
name=\"Child 1\"> <node name=\"AccessNode\"> <property name=\"Output Columns\"> <value>e1 (string)</value> <value>e2 (integer)</value> </property> <property name=\"Cost Estimates\"> <value>Estimated Node Cardinality: -1.0</value> </property> <property name=\"Query\"> <value>SELECT g_0.e1 AS c_0, g_0.e2 AS c_1 FROM pm1.g2 AS g_0 ORDER BY c_0</value> </property> <property name=\"Model Name\"> <value>pm1</value> </property> </node> </property> <property name=\"Join Strategy\"> <value>ENHANCED SORT JOIN (SORT/ALREADY_SORTED)</value> </property> <property name=\"Join Type\"> <value>INNER JOIN</value> </property> <property name=\"Join Criteria\"> <value>pm1.g3.e1=pm1.g2.e1</value> </property> </node> </property> <property name=\"Select Columns\"> <value>pm1.g1.e1</value> <value>pm1.g2.e2</value> <value>pm1.g3.e3</value> </property> </node>", "OPTIMIZATION COMPLETE: PROCESSOR PLAN: ProjectNode(0) output=[pm1.g1.e1, pm1.g2.e2, pm1.g3.e3] [pm1.g1.e1, pm1.g2.e2, pm1.g3.e3] JoinNode(1) [ENHANCED SORT JOIN (SORT/ALREADY_SORTED)] [INNER JOIN] criteria=[pm1.g3.e1=pm1.g2.e1] output=[pm1.g1.e1, pm1.g2.e2, pm1.g3.e3] JoinNode(2) [MERGE JOIN (ALREADY_SORTED/ALREADY_SORTED)] [INNER JOIN] criteria=[pm1.g1.e1=pm1.g3.e1] output=[pm1.g3.e1, pm1.g1.e1, pm1.g3.e3] AccessNode(3) output=[pm1.g1.e1] SELECT g_0.e1 AS c_0 FROM pm1.g1 AS g_0 ORDER BY c_0 AccessNode(4) output=[pm1.g3.e1, pm1.g3.e3] SELECT g_0.e1 AS c_0, g_0.e3 AS c_1 FROM pm1.g3 AS g_0 ORDER BY c_0 AccessNode(5) output=[pm1.g2.e1, pm1.g2.e2] SELECT g_0.e1 AS c_0, g_0.e2 AS c_1 FROM pm1.g2 AS g_0 ORDER BY c_0", "CREATE FOREIGN TABLE q1 (id integer primary key, company varchar(10), order_date timestamp) OPTIONS (NAMEINSOURCE 'dvqe_order_partitioned partition(dvqe_order_partitioned_q1, CARDINALITY '20000')'); CREATE FOREIGN TABLE q2 (id integer primary key, company varchar(10), order_date timestamp) OPTIONS (NAMEINSOURCE 'dvqe_order_partitioned partition(dvqe_order_partitioned_q2, CARDINALITY '10000')'); CREATE FOREIGN TABLE q3 (id integer primary key, company varchar(10), order_date timestamp) OPTIONS (NAMEINSOURCE 'dvqe_order_partitioned partition(dvqe_order_partitioned_q3, CARDINALITY '1000')'); CREATE FOREIGN TABLE q4 (id integer primary key, company varchar(10), order_date timestamp) OPTIONS (NAMEINSOURCE 'dvqe_order_partitioned partition(dvqe_order_partitioned_q4, CARDINALITY '3000000')'); CREATE VIEW orders (id integer primary key, company varchar(10), order_date timestamp) AS SELECT * FROM q1 UNION SELECT * FROM q2 UNION SELECT * FROM q3 UNION SELECT * FROM q4;", "SYSADMIN.setTableStats(IN tableName string NOT NULL, IN cardinality long NOT NULL)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-Query_Plans
Chapter 1. Support overview
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data about your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters . Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can Review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs.
Crio issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Cleaning CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures on the troubleshooting page for OpenShift Logging issues. Check the following to resolve logging issues: Status of the Logging Operator . Status of the Log store . OpenShift Logging alerts . Information about your OpenShift logging environment using oc adm must-gather command . OpenShift CLI (oc) issues : Investigate OpenShift CLI (oc) issues by increasing the log level.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/support/support-overview
Chapter 3. Updating GitOps ZTP
Chapter 3. Updating GitOps ZTP You can update the GitOps Zero Touch Provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters. Note You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements. Important Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching 3.1. Overview of the GitOps ZTP update process You can update GitOps Zero Touch Provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters. Note Any changes to policy settings, including adding recommended content, result in updated policies that must be rolled out to the managed clusters and reconciled. At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows: Label all existing clusters with the ztp-done label. Stop the ArgoCD applications. Install the new GitOps ZTP tools. Update required content and optional changes in the Git repository. Update and restart the application configuration. 3.2. Preparing for the upgrade Use the following procedure to prepare your site for the GitOps Zero Touch Provisioning (ZTP) upgrade. Procedure Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP. Extract the argocd/deployment directory by using the following commands: USD mkdir -p ./update USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./update The /update directory contains the following subdirectories: update/extra-manifest : contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap . update/source-crs : contains the source CR files that the PolicyGenerator or PolicyGentemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. update/argocd/deployment : contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure. update/argocd/example : contains example SiteConfig and PolicyGenerator or PolicyGentemplate files that represent the recommended configuration. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. If the upgrade includes changes that result in obsolete policies, the obsolete policies should be removed prior to performing the upgrade. Diff the changes between the configuration and deployment source CRs in the /update folder and the Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository.
Important When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files. 3.3. Labeling the existing clusters To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label. Note This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done . Procedure Find a label selector that lists the managed clusters that were deployed with GitOps Zero Touch Provisioning (ZTP), such as local-cluster!=true : USD oc get managedcluster -l 'local-cluster!=true' Ensure that the resulting list contains all the managed clusters that were deployed with GitOps ZTP, and then use that selector to add the ztp-done label: USD oc label managedcluster -l 'local-cluster!=true' ztp-done= 3.4. Stopping the existing GitOps ZTP applications Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available. Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first. Procedure Perform a non-cascaded delete on the clusters application to leave all generated resources in place: USD oc delete -f update/argocd/deployment/clusters-app.yaml Perform a cascaded delete on the policies application to remove all policies: USD oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge USD oc delete -f update/argocd/deployment/policies-app.yaml 3.5. Required changes to the Git repository When upgrading the ztp-site-generate container from an earlier release of GitOps Zero Touch Provisioning (ZTP) to 4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes. Note The following procedure assumes you are using PolicyGenerator resources instead of PolicyGenTemplate resources for cluster policy management. Make required changes to PolicyGenerator files: All PolicyGenerator files must be created in a Namespace prefixed with ztp . This ensures that the GitOps ZTP application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. Add the kustomization.yaml file to the repository: All SiteConfig and PolicyGenerator CRs must be included in a kustomization.yaml file under their respective directory trees. For example: ├── acmpolicygenerator │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml Note The files listed in the generator sections must contain either SiteConfig or PolicyGenerator CRs only. If your existing YAML files contain other CRs, for example, Namespace , these other CRs must be pulled out into separate files and listed in the resources section. The PolicyGenerator kustomization file must contain all PolicyGenerator YAML files in the generator section and Namespace CRs in the resources section. 
For example: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - acm-common-ranGen.yaml - acm-group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - acm-group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml The SiteConfig kustomization file must contain all SiteConfig YAML files in the generator section and any other CRs in the resources section: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml Remove the pre-sync.yaml and post-sync.yaml files. In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster. Note There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenerator trees. Review and incorporate recommended changes Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically, these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform. Review the reference SiteConfig and PolicyGenerator CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container. 3.6. Installing the new GitOps ZTP applications Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured. Procedure To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version. For RHACM 2.8 and 2.9, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version> image. For RHACM 2.10 and later, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version> image. Important The version of the multicluster-operators-subscription image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images. Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform. 
Modify the out/argocd/deployment/argocd-openshift-gitops-patch.json file with the multicluster-operators-subscription image that matches your RHACM version: { "args": [ "-c", "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1 ], "command": [ "/bin/bash" ], "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3 "name": "policy-generator-install", "imagePullPolicy": "Always", "volumeMounts": [ { "mountPath": "/.config", "name": "kustomize" } ] } 1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. 2 Match the multicluster-operators-subscription image to the RHACM version. 3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. Patch the ArgoCD instance. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. Run the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by running the following command: USD oc apply -k out/argocd/deployment 3.7. Rolling out the GitOps ZTP configuration changes If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps Zero Touch Provisioning (ZTP) version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently. To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update. Additional resources For information about the Topology Aware Lifecycle Manager (TALM), see About the Topology Aware Lifecycle Manager configuration . For information about creating ClusterGroupUpgrade CRs, see About the auto-created ClusterGroupUpgrade CR for GitOps ZTP .
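Building on section 3.7, the following minimal sketch only illustrates the shape of a ClusterGroupUpgrade CR described there: a list of Non-Compliant policies plus a list of target clusters. The policy names, cluster names, namespace, maxConcurrency, and timeout values are assumptions for illustration; take the authoritative schema from the TALM documentation referenced above.
# Illustrative ClusterGroupUpgrade CR; names and values are placeholders.
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: ztp-config-rollout
  namespace: default
spec:
  clusters:
  - spoke1
  - spoke2
  enable: true
  managedPolicies:
  - common-config-policy
  - common-subscriptions-policy
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
Keeping maxConcurrency small limits how many clusters are remediated at the same time, which is how you confine potentially disruptive changes to a maintenance window.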
[ "mkdir -p ./update", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./update", "oc get managedcluster -l 'local-cluster!=true'", "oc label managedcluster -l 'local-cluster!=true' ztp-done=", "oc delete -f update/argocd/deployment/clusters-app.yaml", "oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge", "oc delete -f update/argocd/deployment/policies-app.yaml", "├── acmpolicygenerator │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - acm-common-ranGen.yaml - acm-group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - acm-group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml", "{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/edge_computing/ztp-updating-gitops
Chapter 1. Support policy for Red Hat build of OpenJDK
Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.8/rn-openjdk-support-policy
Chapter 3. Predefined User Access roles
Chapter 3. Predefined User Access roles The following table lists the predefined roles provided with User Access. Some of the predefined roles are included in the Default access group, which includes all authenticated users in your organization. Only the Organization Administrator users in your organization inherit the roles in the Default admin access group. Because this group is provided by Red Hat, it is updated automatically when Red Hat assigns roles to the Default admin access group. For more information about viewing predefined roles, see Chapter 2, Procedures for configuring User Access . NOTE Predefined roles are updated and modified by Red Hat and cannot be modified. The table might not contain all currently available predefined roles. Table 3.1. Predefined roles provided with User Access Role name Description Default access group Default admin access group Compliance administrator A Compliance role that grants full access to any Compliance resource. X Compliance viewer A Compliance role that grants read access to any Compliance resource. X RHEL Advisor administrator Perform any available operation against any RHEL Advisor resource. X Inventory Groups Administrator Be able to read and edit Inventory Groups data. X Inventory Groups Viewer Be able to read Inventory Groups data. Inventory Hosts Administrator Be able to read and edit Inventory Hosts data. X X Inventory Hosts Viewer Be able to read Inventory Hosts data. Inventory administrator Perform any available operation against any Inventory resource. Malware detection administrator Perform any available operation against any malware-detection resource. X Malware detection viewer Read any malware-detection resource. Notifications administrator Perform any available operation against Notifications and Integrations applications. X Notifications viewer Read only access to notifications and integrations applications. Patch administrator Perform any available operation against any Patch resource. X Patch viewer Read any Patch resource. X Policies administrator Perform any available operation against any Policies resource. X Policies viewer Perform read only operation against any Policies resource. X Remediations administrator Perform any available operation against any Remediations resource Remediations user Perform create, view, update, delete operations against any Remediations resource. X Resource Optimization administrator Perform any available operation against any Resource Optimization resource. X Resource Optimization user A Resource Optimization user role that grants read only permission. X Tasks administrator Perform any available operation against any Tasks resource. X User Access administrator Grants a non-org admin full access to configure and manage user access to services hosted on console.redhat.com. This role can only be viewed and assigned by Organization Administrators. User Access principal viewer Grants a non-org admin read access to principals within user access. Vulnerability administrator Perform any available operation against any Vulnerability resource. X Vulnerability viewer Read any Vulnerability resource. X
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/user_access_configuration_guide_for_role-based_access_control_rbac_with_fedramp/assembly-insights-rbac-reference_user-access-configuration
Chapter 1. Installation methods
Chapter 1. Installation methods You can install a cluster on IBM Power(R) infrastructure that you provision, by using one of the following methods: Installing a cluster on IBM Power(R) : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision. Installing a cluster on IBM Power(R) in a restricted network : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power/preparing-to-install-on-ibm-power
Chapter 1. Introduction
Chapter 1. Introduction 1.1. About the CLI Guide This guide is for engineers, consultants, and others who want to use the Migration Toolkit for Applications (MTA) to migrate Java applications, .NET applications, or other components. In MTA 7.1 and later, you can use MTA to analyze applications written in languages other than Java. To run analysis on applications written in languages other than Java, add a custom rule set and do not specify a target language. This guide describes how to install and run the CLI, review the generated reports, and take advantage of additional features. Important .NET migration is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. Important Analyzing applications written in a language other than Java is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. 1.2. About the Migration Toolkit for Applications What is the Migration Toolkit for Applications? Migration Toolkit for Applications (MTA) accelerates large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. In MTA 7.1 and later, when you add an application to the Application Inventory , MTA automatically creates and executes language and technology discovery tasks. Language discovery identifies the programming languages used in the application. Technology discovery identifies technologies, such as Enterprise Java Beans (EJB), Spring, etc. Then, each task assigns appropriate tags to the application, reducing the time and effort you spend manually tagging the application. MTA uses an extensive default questionnaire as the basis for assessing your applications, or you can create your own custom questionnaire, enabling you to estimate the difficulty, time, and other resources needed to prepare an application for containerization. 
You can use the results of an assessment as the basis for discussions between stakeholders to determine which applications are good candidates for containerization, which require significant work first, and which are not suitable for containerization. MTA analyzes applications by applying one or more rulesets to each application considered to determine which specific lines of that application must be modified before it can be modernized. MTA examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. How does the Migration Toolkit for Applications simplify migration? The Migration Toolkit for Applications looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTA generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved. 1.2.1. Supported Migration Toolkit for Applications migration paths The Migration Toolkit for Applications (MTA) supports the following migrations: Migrating from third-party enterprise application servers, such as Oracle WebLogic Server, to JBoss Enterprise Application Platform (JBoss EAP). Upgrading to the latest release of JBoss EAP. Migrating from a Windows-only .NET 4.5+ Framework to cross-platform .NET 8.0. (Developer Preview) MTA provides a comprehensive set of rules to assess the suitability of your applications for containerization and deployment on Red Hat OpenShift Container Platform (RHOCP). You can run an MTA analysis to assess your applications' suitability for migration to multiple target platforms. Table 1.1. Supported Java migration paths: Source platform ⇒ Target platform Source platform ⇒ Migration to JBoss EAP 7 & 8 OpenShift (cloud readiness) OpenJDK 11, 17, and 21 Jakarta EE 9 Camel 3 & 4 Spring Boot in Red Hat Runtimes Quarkus Open Liberty Oracle WebLogic Server ✔ ✔ ✔ - - - - - IBM WebSphere Application Server ✔ ✔ ✔ - - - - ✔ JBoss EAP 4 ✘ [a] ✔ ✔ - - - - - JBoss EAP 5 ✔ ✔ ✔ - - - - - JBoss EAP 6 ✔ ✔ ✔ - - - - - JBoss EAP 7 ✔ ✔ ✔ - - - ✔ - Thorntail ✔ [b] - - - - - - - Oracle JDK - ✔ ✔ - - - - - Camel 2 - ✔ ✔ - ✔ - - - Spring Boot - ✔ ✔ ✔ - ✔ ✔ - Any Java application - ✔ ✔ - - - - - Any Java EE application - - - ✔ - - - - [a] Although MTA does not currently provide rules for this migration path, Red Hat Consulting can assist with migration from any source platform to JBoss EAP 7. [b] Requires JBoss Enterprise Application Platform expansion pack 2 (EAP XP 2) .NET migration paths: Source platform ⇒ Target platform (Developer Preview) Source platform ⇒ OpenShift (cloud readiness) Migration to .NET 8.0 .NET Framework 4.5+ (Windows only) ✔ ✔ For more information about use cases and migration paths, see the MTA for developers web page. 1.3. The MTA CLI The CLI is a command-line tool in the Migration Toolkit for Applications that you can use to assess and prioritize migration and modernization efforts for applications. It provides numerous reports that highlight the analysis without using the other tools. The CLI includes a wide array of customization options. By using the CLI, you can tune MTA analysis options or integrate with external automation tools.
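To ground the CLI description, the following invocation sketches a typical analysis run. The paths, the eap8 target, and the exact flag spellings are illustrative assumptions to verify against the help output of your installed CLI version rather than an authoritative reference.
# Analyze a local application archive and write an HTML report to a new directory.
mta-cli analyze --input /home/user/apps/example.war --output /home/user/mta-report --target eap8
When the run completes, open the generated report in a browser to review the flagged issues and the estimated migration effort.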
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/introduction
40.2. Configuring OProfile
40.2. Configuring OProfile Before OProfile can be run, it must be configured. At a minimum, selecting to monitor the kernel (or selecting not to monitor the kernel) is required. The following sections describe how to use the opcontrol utility to configure OProfile. As the opcontrol commands are executed, the setup options are saved to the /root/.oprofile/daemonrc file. 40.2.1. Specifying the Kernel First, configure whether OProfile should monitor the kernel. This is the only configuration option that is required before starting OProfile. All others are optional. To monitor the kernel, execute the following command as root: Note The debuginfo package must be installed (which contains the uncompressed kernel) in order to monitor the kernel. To configure OProfile not to monitor the kernel, execute the following command as root: This command also loads the oprofile kernel module, if it is not already loaded, and creates the /dev/oprofile/ directory, if it does not already exist. Refer to Section 40.6, "Understanding /dev/oprofile/ " for details about this directory. Note Even if OProfile is configured not to profile the kernel, the SMP kernel still must be running so that the oprofile module can be loaded from it. Setting whether samples should be collected within the kernel only changes what data is collected, not how or where the collected data is stored. To generate different sample files for the kernel and application libraries, refer to Section 40.2.3, "Separating Kernel and User-space Profiles" .
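Once the kernel setting is in place, a complete profiling session is short. The following sketch assumes the --setup command shown above has already been run and uses ./my-workload as a placeholder for the program you want to profile; verify the opcontrol and opreport options against the versions shipped with your release.
# Start the profiling daemon and collect samples while the workload runs.
opcontrol --start
./my-workload
# Flush the samples to disk, summarize them, and stop the daemon.
opcontrol --dump
opreport --long-filenames
opcontrol --shutdown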
[ "opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux", "opcontrol --setup --no-vmlinux" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/oprofile-configuring_oprofile
function::user_uint8
function::user_uint8 Name function::user_uint8 - Retrieves an unsigned 8-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the unsigned 8-bit integer from Description Returns the unsigned 8-bit integer value from a given user space address. Returns zero when user space data is not accessible.
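As a usage sketch, the short SystemTap script below prints the first byte of the buffer passed to each write system call made by a target process. The syscall.write probe point and its buf_uaddr variable come from the standard syscall tapset; treat them as assumptions to confirm against the tapset shipped with your release.
# first-byte.stp: print the first byte written by the target process.
probe syscall.write {
  if (pid() == target())
    printf("%s wrote first byte %d\n", execname(), user_uint8(buf_uaddr))
}
# Example run against a single command:
#   stap first-byte.stp -c "echo hello"
Because user_uint8 returns zero when the address is not accessible, the script degrades gracefully if the buffer is paged out or the pointer is invalid.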
[ "user_uint8:long(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-uint8
Chapter 9. Performing basic overcloud administration tasks
Chapter 9. Performing basic overcloud administration tasks This chapter contains information about basic tasks you might need to perform during the lifecycle of your overcloud. 9.1. Accessing overcloud nodes through SSH You can access each overcloud node through the SSH protocol. Each overcloud node contains a tripleo-admin user, formerly known as the heat-admin user. The stack user on the undercloud has key-based SSH access to the tripleo-admin user on each overcloud node. All overcloud nodes have a short hostname that the undercloud resolves to an IP address on the control plane network. Each short hostname uses a .ctlplane suffix. For example, the short name for overcloud-controller-0 is overcloud-controller-0.ctlplane . Prerequisites A deployed overcloud with a working control plane network. Procedure Log in to the undercloud as the stack user. Find the name of the node that you want to access: Connect to the node as the tripleo-admin user: 9.2. Managing containerized services Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services. Listing containers and images To list running containers, run the following command: To include stopped or failed containers in the command output, add the --all option to the command: To list container images, run the following command: Inspecting container properties To view the properties of a container or container images, use the podman inspect command. For example, to inspect the keystone container, run the following command: Managing containers with Systemd services Previous versions of OpenStack Platform managed containers with Docker and its daemon. Now, the Systemd services interface manages the lifecycle of the containers. Each container is a service, and you run Systemd commands to perform specific operations for each container. Note It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead. To check a container status, run the systemctl status command: To stop a container, run the systemctl stop command: To start a container, run the systemctl start command: To restart a container, run the systemctl restart command: Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations: Clean exit code or signal, such as running the podman stop command. Unclean exit code, such as the podman container crashing after a start. Unclean signals. Timeout if the container takes more than 1m 30s to start. For more information about Systemd services, see the systemd.service documentation . Note Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/ . For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart. 
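Because a container regenerates its service configuration from the puppet-generated tree described above, a change that must survive restarts is made on the host file system rather than inside the container. The following sketch shows that pattern for the keystone example; the specific option you edit is up to you and is not shown here.
# Edit the host-side copy that the container configuration is regenerated from.
sudo vi /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
# Restart the container through its Systemd service so the change is picked up.
sudo systemctl restart tripleo_keystone
# Confirm that the service came back up cleanly.
sudo systemctl status tripleo_keystone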
Monitoring podman containers with podman healthcheck You can use the podman healthcheck command to check the health of a RHOSP service container: Example: Health checks are not configured for every service container. If you run the podman healthcheck command on a container that does not have a health check defined, you receive an error to indicate that the container has no defined health check. Checking container logs Red Hat OpenStack Platform 17.1 logs all standard output (stdout) and standard error (stderr) from all containers, consolidated in one file for each container in /var/log/containers/stdout . The host also applies log rotation to this directory, which prevents huge files and disk space issues. If a container is replaced, the new container outputs to the same log file, because podman uses the container name instead of the container ID. You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command: Accessing containers To enter the shell for a containerized service, use the podman exec command to launch /bin/bash . For example, to enter the shell for the keystone container, run the following command: To enter the shell for the keystone container as the root user, run the following command: To exit the container, run the following command: 9.3. Modifying the overcloud environment You can modify the overcloud to add additional features or alter existing operations. Procedure To modify the overcloud, make modifications to your custom environment files and heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 7.3, "Configuring and deploying the overcloud" , rerun the following command: Director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. Director does not recreate the overcloud, but rather changes the existing overcloud. Important Removing parameters from custom environment files does not revert the parameter value to the default configuration. You must identify the default value from the core heat template collection in /usr/share/openstack-tripleo-heat-templates and set the value in your custom environment file manually. If you want to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example: This command includes the new parameters and resources from the environment file into the stack. Important It is not advisable to make manual modifications to the overcloud configuration because director might overwrite these modifications later. 9.4. Importing virtual machines into the overcloud You can migrate virtual machines from an existing OpenStack environment to your Red Hat OpenStack Platform (RHOSP) environment. Procedure On the existing OpenStack environment, create a new image by taking a snapshot of a running server and download the image: Replace <instance_name> with the name of the instance. Replace <image_name> with the name of the new image. Replace <exported_vm.qcow2> with the name of the exported virtual machine. Copy the exported image to the undercloud node: Log in to the undercloud as the stack user. 
Source the overcloudrc credentials file: Upload the exported image into the overcloud: Launch a new instance: Important You can use these commands to copy each virtual machine disk from the existing OpenStack environment to the new Red Hat OpenStack Platform environment. QCOW snapshots lose their original layering system. 9.5. Launching the ephemeral heat process In previous versions of Red Hat OpenStack Platform (RHOSP), a system-installed Heat process was used to install the overcloud. Now, an ephemeral Heat process installs the overcloud, meaning that the heat-api and heat-engine processes are started on demand by the deployment , update , and upgrade commands. Previously, you used the openstack stack command to create and manage stacks. This command is no longer available by default. For troubleshooting and debugging purposes, for example, if the stack fails, you must first launch the ephemeral Heat process to use the openstack stack commands. Use the openstack tripleo launch heat command to enable ephemeral heat outside of a deployment. Procedure Launch the ephemeral Heat process: Replace <overcloud> with the name of your overcloud stack. Note The command exits after launching the Heat process, and the Heat process continues to run in the background as a Podman pod. Verify that the ephemeral-heat process is running: Export the OS_CLOUD environment: List the installed stacks: You can debug with commands such as openstack stack environment show and openstack stack resource list . After you have finished debugging, stop the ephemeral Heat process: Note Sometimes, exporting the heat environment fails. This can happen when other credentials, such as overcloudrc , are in use. In this case, unset the existing environment and source the heat environment. 9.6. Running the dynamic inventory script You can run Ansible-based automation in your Red Hat OpenStack Platform (RHOSP) environment. Use the tripleo-ansible-inventory.yaml inventory file located in the /home/stack/overcloud-deploy/<stack> directory to run Ansible plays or ad-hoc commands. Note If you want to run an Ansible playbook or an Ansible ad-hoc command on the undercloud, you must use the /home/stack/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml inventory file. Procedure To view your inventory of nodes, run the following Ansible ad-hoc command: Note Replace stack with the name of your deployed overcloud stack. To execute Ansible playbooks on your environment, run the ansible command and include the full path to the inventory file using the -i option. For example: Replace <hosts> with the type of hosts that you want to use: controller for all Controller nodes compute for all Compute nodes overcloud for all overcloud child nodes. For example, controller and compute nodes "*" for all nodes Replace <options> with additional Ansible options. Use the --ssh-extra-args='-o StrictHostKeyChecking=no' option to bypass confirmation on host key checking. Use the -u [USER] option to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter. Use the -m [MODULE] option to use a specific Ansible module. The default is command , which executes Linux commands. Use the -a [MODULE_ARGS] option to define arguments for the chosen module. Important Custom Ansible automation on the overcloud is not part of the standard overcloud stack. 
Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes. 9.7. Removing an overcloud stack You can delete an overcloud stack and unprovision all the stack nodes. Note Deleting your overcloud stack does not erase all the overcloud data. If you need to erase all the overcloud data, contact Red Hat support. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Retrieve a list of all the nodes in your stack and their current status: Delete the overcloud stack and unprovision the nodes and networks: Replace <node_definition_file> with the name of your node definition file, for example, overcloud-baremetal-deploy.yaml . Replace <networks_definition_file> with the name of your networks definition file, for example, network_data_v2.yaml . Replace <stack> with the name of the stack that you want to delete. If not specified, the default stack is overcloud . Confirm that you want to delete the overcloud: Wait for the overcloud to delete and the nodes and networks to unprovision. Confirm that the bare-metal nodes have been unprovisioned: Remove the stack directories: Note The directory paths for your stack might be different from the default if you used the --output-dir and --working-dir options when deploying the overcloud with the openstack overcloud deploy command. 9.8. Managing local disk partition sizes If your local disk partitions continue to fill up after you have optimized the configuration of your partition sizes, then perform one of the following tasks: Manually delete files from the affected partitions. Add a new physical disk and add it to the LVM volume group. For more information, see Configuring and managing logical volumes . Overprovision the partition to use the remaining spare disk space. This option is possible because the default whole disk overcloud image, overcloud-hardened-uefi-full.qcow2 , is backed by a thin pool. For more information on thin-provisioned logical volumes, see Creating and managing thin provisioned volumes (thin volumes) in the RHEL Configuring and managing local volumes guide. Warning Only use overprovisioning when it is not possible to manually delete files or add a new physical disk. Overprovisioning can fail during the write operation if there is insufficient free physical space. Note Adding a new disk and overprovisioning the partition require a support exception. Contact the Red Hat Customer Experience and Engagement team to discuss a support exception, if applicable, or other options.
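Returning to the dynamic inventory workflow in Section 9.6, a filled-in ad-hoc run might look like the following sketch. The stack name overcloud, the controller host group, and the uptime check are illustrative choices, not requirements.
# List the inventoried nodes, then run a quick uptime check on the Controller role.
ansible -i ./overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml all --list-hosts
ansible controller -i ./overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml -m command -a "uptime" --ssh-extra-args='-o StrictHostKeyChecking=no'
For playbooks, the equivalent pattern uses the ansible-playbook command with the same -i argument pointing at the stack inventory file.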
[ "(undercloud)USD metalsmith list", "(undercloud)USD ssh [email protected]", "sudo podman ps", "sudo podman ps --all", "sudo podman images", "sudo podman inspect keystone", "sudo systemctl status tripleo_keystone ● tripleo_keystone.service - keystone container Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago Main PID: 29012 (podman) CGroup: /system.slice/tripleo_keystone.service └─29012 /usr/bin/podman start -a keystone", "sudo systemctl stop tripleo_keystone", "sudo systemctl start tripleo_keystone", "sudo systemctl restart tripleo_keystone", "sudo podman healthcheck run keystone", "sudo podman logs keystone", "sudo podman exec -it keystone /bin/bash", "sudo podman exec --user 0 -it <NAME OR ID> /bin/bash", "exit", "source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/overcloud-baremetal-deployed.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --ntp-server pool.ntp.org", "source ~/stackrc (undercloud) USD openstack overcloud deploy --templates -e ~/templates/new-environment.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml -e ~/templates/overcloud-baremetal-deployed.yaml --ntp-server pool.ntp.org", "openstack server image create --name <image_name> <instance_name> openstack image save --file <exported_vm.qcow2> <image_name>", "scp exported_vm.qcow2 [email protected]:~/.", "source ~/overcloudrc", "(overcloud) USD openstack image create --disk-format qcow2 -file <exported_vm.qcow2> --container-format bare <image_name>", "(overcloud) USD openstack server create --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id <instance_name>", "(undercloud)USD openstack tripleo launch heat --heat-dir /home/stack/overcloud-deploy/<overcloud>/heat-launcher --restore-db", "(undercloud)USD sudo podman pod ps POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS 958b141609b2 ephemeral-heat Running 2 minutes ago 44447995dbcf 3", "(undercloud)USD export OS_CLOUD=heat", "(undercloud)USD openstack stack list +--------------------------------------+------------+---------+-----------------+----------------------+--------------+ | ID | Stack Name | Project | Stack Status | Creation Time | Updated Time | +--------------------------------------+------------+---------+-----------------+----------------------+--------------+ | 761e2a54-c6f9-4e0f-abe6-c8e0ad51a76c | overcloud | admin | CREATE_COMPLETE | 2022-08-29T20:48:37Z | None | +--------------------------------------+------------+---------+-----------------+----------------------+--------------+", "(undercloud)USD openstack tripleo launch heat --kill", "(overcloud)USD unset OS_CLOUD (overcloud)USD unset OS_PROJECT_NAME (overcloud)USD unset OS_PROJECT_DOMAIN_NAME (overcloud)USD unset OS_USER_DOMAIN_NAME (overcloud)USD OS_AUTH_TYPE=none (overcloud)USD OS_ENDPOINT=http://127.0.0.1:8006/v1/admin (overcloud)USD export OS_CLOUD=heat", "(undercloud) USD ansible -i ./overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml all --list", "(undercloud) USD ansible <hosts> -i ./overcloud-deploy/tripleo-ansible-inventory.yaml <playbook> <options>", "source ~/stackrc", "(undercloud)USD openstack baremetal node list +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | 
+--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+ | 92ae71b0-3c31-4ebb-b467-6b5f6b0caac7 | compute-0 | 059fb1a1-53ea-4060-9a47-09813de28ea1 | power on | active | False | | 9d6f955e-3d98-4d1a-9611-468761cebabf | compute-1 | e73a4b50-9579-4fe1-bd1a-556a2c8b504f | power on | active | False | | 8a686fc1-1381-4238-9bf3-3fb16eaec6ab | controller-0 | 6d69e48d-10b4-45dd-9776-155a9b8ad575 | power on | active | False | | eb8083cc-5f8f-405f-9b0c-14b772ce4534 | controller-1 | 1f836ac0-a70d-4025-88a3-bbe0583b4b8e | power on | active | False | | a6750f1f-8901-41d6-b9f1-f5d6a10a76c7 | controller-2 | e2edd028-cea6-4a98-955e-5c392d91ed46 | power on | active | False | +--------------------------------------+--------------+--------------------------------------+-------------+--------------------+-------------+", "(undercloud)USD openstack overcloud delete -b <node_definition_file> --networks-file <networks_definition_file> --network-ports <stack>", "Are you sure you want to delete this overcloud [y/N]?", "(undercloud) [stack@undercloud-0 ~]USD openstack baremetal node list +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+ | 92ae71b0-3c31-4ebb-b467-6b5f6b0caac7 | compute-0 | None | power off | available | False | | 9d6f955e-3d98-4d1a-9611-468761cebabf | compute-1 | None | power off | available | False | | 8a686fc1-1381-4238-9bf3-3fb16eaec6ab | controller-0 | None | power off | available | False | | eb8083cc-5f8f-405f-9b0c-14b772ce4534 | controller-1 | None | power off | available | False | | a6750f1f-8901-41d6-b9f1-f5d6a10a76c7 | controller-2 | None | power off | available | False | +--------------------------------------+--------------+---------------+-------------+--------------------+-------------+", "rm -rf ~/overcloud-deploy/<stack> rm -rf ~/config-download/<stack>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_performing-basic-overcloud-administration-tasks
Chapter 3. Performing maintenance on the undercloud and overcloud with Instance HA
Chapter 3. Performing maintenance on the undercloud and overcloud with Instance HA To perform maintenance on the undercloud and overcloud, you must shut down and start up the undercloud and overcloud nodes in a specific order to ensure minimal issues when you start your overcloud. You can also perform maintenance on a specific Compute or Controller node by stopping the node and disabling the Pacemaker resources on the node. 3.1. Prerequisites A running undercloud and overcloud with Instance HA enabled. 3.2. Undercloud and overcloud shutdown order To shut down the Red Hat OpenStack Platform environment, you must shut down the overcloud and undercloud in the following order: Shut down instances on overcloud Compute nodes Shut down Compute nodes Stop all high availability and OpenStack Platform services on Controller nodes Shut down Ceph Storage nodes Shut down Controller nodes Shut down the undercloud 3.2.1. Shutting down instances on overcloud Compute nodes As a part of shutting down the Red Hat OpenStack Platform environment, shut down all instances on Compute nodes before shutting down the Compute nodes. Prerequisites An overcloud with active Compute services Procedure Log in to the undercloud as the stack user. Source the credentials file for your overcloud: View running instances in the overcloud: Stop each instance in the overcloud: Repeat this step for each instance until you stop all instances in the overcloud. 3.2.2. Stopping instance HA services on overcloud Compute nodes As a part of shutting down the Red Hat OpenStack Platform environment, you must shut down all Instance HA services that run on Compute nodes before shutting down the Compute nodes. Prerequisites An overcloud with active Compute services Instance HA is enabled on Compute nodes Procedure Log in as the root user to an overcloud node that runs Pacemaker. Disable the Pacemaker remote resource on each Compute node: Identify the Pacemaker Remote resource on Compute nodes: These resources use the ocf::pacemaker:remote agent and are usually named after the Compute node host format, such as overcloud-novacomputeiha-0 . Disable each Pacemaker Remote resource. The following example shows how to disable the resource for overcloud-novacomputeiha-0 : Disable the Compute node STONITH devices: Identify the Compute node STONITH devices: Disable each Compute node STONITH device: 3.2.3. Shutting down Compute nodes As a part of shutting down the Red Hat OpenStack Platform environment, log in to and shut down each Compute node. Prerequisites Shut down all instances on the Compute nodes Procedure Log in as the root user to a Compute node. Shut down the node: Perform these steps for each Compute node until you shut down all Compute nodes. 3.2.4. Stopping services on Controller nodes As a part of shutting down the Red Hat OpenStack Platform environment, stop services on the Controller nodes before shutting down the nodes. This includes Pacemaker and systemd services. Prerequisites An overcloud with active Pacemaker services Procedure Log in as the root user to a Controller node. Stop the Pacemaker cluster. This command stops the cluster on all nodes. Wait until the Pacemaker services stop and check that the services stopped. Check the Pacemaker status: Check that no Pacemaker services are running in Podman: Stop the Red Hat OpenStack Platform services: Wait until the services stop and check that services are no longer running in Podman: 3.2.5. 
Shutting down Ceph Storage nodes As a part of shutting down the Red Hat OpenStack Platform environment, disable Ceph Storage services then log in to and shut down each Ceph Storage node. Prerequisites A healthy Ceph Storage cluster Ceph MON services are running on standalone Ceph MON nodes or on Controller nodes Procedure Log in as the root user to a node that runs Ceph MON services, such as a Controller node or a standalone Ceph MON node. Check the health of the cluster. In the following example, the podman command runs a status check within a Ceph MON container on a Controller node: Ensure that the status is HEALTH_OK . Set the noout , norecover , norebalance , nobackfill , nodown , and pause flags for the cluster. In the following example, the podman commands set these flags through a Ceph MON container on a Controller node: Shut down each Ceph Storage node: Log in as the root user to a Ceph Storage node. Shut down the node: Perform these steps for each Ceph Storage node until you shut down all Ceph Storage nodes. Shut down any standalone Ceph MON nodes: Log in as the root user to a standalone Ceph MON node. Shut down the node: Perform these steps for each standalone Ceph MON node until you shut down all standalone Ceph MON nodes. Additional resources "What is the procedure to shutdown and bring up the entire ceph cluster?" 3.2.6. Shutting down Controller nodes As a part of shutting down the Red Hat OpenStack Platform environment, log in to and shut down each Controller node. Prerequisites Stop the Pacemaker cluster Stop all Red Hat OpenStack Platform services on the Controller nodes Procedure Log in as the root user to a Controller node. Shut down the node: Perform these steps for each Controller node until you shut down all Controller nodes. 3.2.7. Shutting down the undercloud As a part of shutting down the Red Hat OpenStack Platform environment, log in to the undercloud node and shut down the undercloud. Prerequisites A running undercloud Procedure Log in to the undercloud as the stack user. Shut down the undercloud: 3.3. Performing system maintenance After you completely shut down the undercloud and overcloud, perform any maintenance to the systems in your environment and then start up the undercloud and overcloud. 3.4. Undercloud and overcloud startup order To start the Red Hat OpenStack Platform environment, you must start the undercloud and overcloud in the following order: Start the undercloud. Start Controller nodes. Start Ceph Storage nodes. Start Compute nodes. Start instances on overcloud Compute nodes. 3.4.1. Starting the undercloud As a part of starting the Red Hat OpenStack Platform environment, power on the undercloud node, log in to the undercloud, and check the undercloud services. Prerequisites The undercloud is powered down. Procedure Power on the undercloud and wait until the undercloud boots. Verification Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Check the services on the undercloud: Validate the static inventory file named tripleo-ansible-inventory.yaml : Replace <inventory_file> with the name and location of the Ansible inventory file, for example, ~/tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml . Note When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files. 
Check that all services and containers are active and healthy: Additional resources Using the validation framework in Customizing your Red Hat OpenStack Platform deployment 3.4.2. Starting Controller nodes As a part of starting the Red Hat OpenStack Platform environment, power on each Controller node and check the non-Pacemaker services on the node. Prerequisites The Controller nodes are powered down. Procedure Power on each Controller node. Verification Log in to each Controller node as the root user. Check the services on the Controller node: Only non-Pacemaker based services are running. Wait until the Pacemaker services start and check that the services started: Note If your environment uses Instance HA, the Pacemaker resources do not start until you start the Compute nodes or perform a manual unfence operation with the pcs stonith confirm <compute_node> command. You must run this command on each Compute node that uses Instance HA. 3.4.3. Starting Ceph Storage nodes As a part of starting the Red Hat OpenStack Platform environment, power on the Ceph MON and Ceph Storage nodes and enable Ceph Storage services. Prerequisites A powered down Ceph Storage cluster Ceph MON services are enabled on powered down standalone Ceph MON nodes or on powered on Controller nodes Procedure If your environment has standalone Ceph MON nodes, power on each Ceph MON node. Power on each Ceph Storage node. Log in as the root user to a node that runs Ceph MON services, such as a Controller node or a standalone Ceph MON node. Check the status of the cluster nodes. In the following example, the podman command runs a status check within a Ceph MON container on a Controller node: Ensure that each node is powered on and connected. Unset the noout , norecover , norebalance , nobackfill , nodown and pause flags for the cluster. In the following example, the podman commands unset these flags through a Ceph MON container on a Controller node: Verification Check the health of the cluster. In the following example, the podman command runs a status check within a Ceph MON container on a Controller node: Ensure the status is HEALTH_OK . Additional resources "What is the procedure to shutdown and bring up the entire ceph cluster?" 3.4.4. Starting Compute nodes As a part of starting the Red Hat OpenStack Platform environment, power on each Compute node and check the services on the node. Prerequisites Powered down Compute nodes Procedure Power on each Compute node. Verification Log in to each Compute as the root user. Check the services on the Compute node: 3.4.5. Starting instance HA services on overcloud Compute nodes As a part of starting the Red Hat OpenStack Platform environment, start all Instance HA services on the Compute nodes. Prerequisites An overcloud with running Compute nodes Instance HA is enabled on Compute nodes Procedure Log in as the root user to an overcloud node that runs Pacemaker. Enable the STONITH device for a Compute node: Identify the Compute node STONITH device: Clear any STONITH errors for the Compute node: This command returns the node to a clean STONITH state. Enable the Compute node STONITH device: Perform these steps for each Compute node with STONITH. Enable the Pacemaker remote resource on each Compute node: Identify the Pacemaker remote resources on Compute nodes: These resources use the ocf::pacemaker:remote agent and are usually named after the Compute node host format, such as overcloud-novacomputeiha-0 . Enable each Pacemaker Remote resource. 
The following example shows how to enable the resource for overcloud-novacomputeiha-0 : Perform these steps for each Compute node with Pacemaker remote management. Wait until the Pacemaker services start and check that the services started: If any Pacemaker resources fail to start during the startup process, reset the status and the fail count of the resource: Note Some services might require more time to start, such as fence_compute and fence_kdump . 3.4.6. Starting instances on overcloud Compute nodes As a part of starting the Red Hat OpenStack Platform environment, start the instances on the Compute nodes. Prerequisites An active overcloud with active nodes Procedure Log in to the undercloud as the stack user. Source the credentials file for your overcloud: View running instances in the overcloud: Start an instance in the overcloud:
[ "source ~/overcloudrc", "openstack server list --all-projects", "openstack server stop <INSTANCE>", "pcs resource status", "pcs resource disable overcloud-novacomputeiha-0", "pcs stonith status", "pcs stonith disable <STONITH_DEVICE>", "shutdown -h now", "pcs cluster stop --all", "pcs status", "podman ps --filter \"name=.*-bundle.*\"", "systemctl stop 'tripleo_*'", "podman ps", "sudo podman exec -it ceph-mon-controller-0 ceph status", "sudo podman exec -it ceph-mon-controller-0 ceph osd set noout sudo podman exec -it ceph-mon-controller-0 ceph osd set norecover sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance sudo podman exec -it ceph-mon-controller-0 ceph osd set nobackfill sudo podman exec -it ceph-mon-controller-0 ceph osd set nodown sudo podman exec -it ceph-mon-controller-0 ceph osd set pause", "shutdown -h now", "shutdown -h now", "shutdown -h now", "sudo shutdown -h now", "source ~/stackrc", "systemctl list-units 'tripleo_*'", "validation run --group pre-introspection -i <inventory_file>", "validation run --validation service-status --limit undercloud -i <inventory_file>", "systemctl -t service", "pcs status", "sudo podman exec -it ceph-mon-controller-0 ceph status", "sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout sudo podman exec -it ceph-mon-controller-0 ceph osd unset norecover sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance sudo podman exec -it ceph-mon-controller-0 ceph osd unset nobackfill sudo podman exec -it ceph-mon-controller-0 ceph osd unset nodown sudo podman exec -it ceph-mon-controller-0 ceph osd unset pause", "sudo podman exec -it ceph-mon-controller-0 ceph status", "systemctl -t service", "pcs stonith status", "pcs stonith confirm <COMPUTE_NODE>", "pcs stonith enable <STONITH_DEVICE>", "pcs resource status", "pcs resource enable overcloud-novacomputeiha-0", "pcs status", "pcs resource cleanup", "source ~/overcloudrc", "openstack server list --all-projects", "openstack server start <INSTANCE>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_high_availability_for_instances/assembly-performing-maintenance-undercloud-overcloud-with-instanceha_rhosp
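If your deployment has many Instance HA Compute nodes, the per-node steps in section 3.4.5 above can also be scripted. This is a hedged sketch rather than a supported procedure: the node list and the STONITH device naming are placeholder assumptions, so confirm the real names with pcs stonith status and pcs resource status before running anything like it.

#!/bin/bash
# Sketch: re-enable STONITH devices and Pacemaker remote resources for Instance HA Compute nodes.
set -euo pipefail
COMPUTE_NODES="overcloud-novacomputeiha-0 overcloud-novacomputeiha-1"   # placeholder node list

for node in ${COMPUTE_NODES}; do
    pcs stonith confirm "${node}"                       # clears STONITH errors; may prompt for confirmation
    pcs stonith enable "stonith-fence-ipmilan-${node}"  # hypothetical device name -- check "pcs stonith status"
    pcs resource enable "${node}"                       # Pacemaker remote resource named after the node
done

pcs status
pcs resource cleanup   # reset status and fail counts for resources that were slow to start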
8.63. hdparm
8.63. hdparm 8.63.1. RHBA-2013:1580 - hdparm bug fix and enhancement update Updated hdparm packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Hdparm is a useful system utility for setting (E)IDE hard drive parameters. For example, hdparm can be used to tweak hard drive performance and to spin down hard drives for power conservation. Note The hdparm packages have been upgraded to upstream version 9.43, which provides a number of bug fixes and enhancements over the previous version. These enhancements include creating files with the desired size on the ext4 and xfs filesystems, and the ability to specify the offset for reading operations when measuring timing performance. Other notable enhancements include the ability to obtain and set the "idle3" timeout value of the Western Digital Green (WDG) hard drive, and the ability to obtain and set the Write-Read-Verify feature for hard drives. (BZ# 977800 ) Bug Fixes BZ# 639623 Previously, the hdparm utility did not assume that some disk information could be unavailable. As a consequence, hdparm could terminate unexpectedly with no useful output. With this update, proper checks for unsuccessful disk queries have been added, and hdparm now terminates with a more detailed error message. BZ# 735887 Prior to this update, the hdparm utility did not assume that some disk information could be unavailable when the user requested information about how much disk space a file occupied. Consequently, hdparm terminated unexpectedly with no useful output in such a scenario. With this update, proper checks for unsuccessful disk queries have been added. As a result, hdparm now terminates with an error message providing detailed information. BZ# 807056 Previously, the hdparm utility retrieved the hard drive identification data in a way that could cause errors. As a consequence, hdparm failed to obtain the data on some occasions and displayed an unhelpful error message. With this update, the respective system call has been replaced with one that is more appropriate and robust. As a result, the hard drive identification data is now successfully obtained and printed in the output. BZ# 862257 When the hdparm utility is unable to obtain the necessary geometry information about a hard drive, it attempts to download firmware. Previously, due to incorrect control statements, hdparm could terminate unexpectedly with a segmentation fault at such a download attempt. With this update, control statements checking for system call failures have been added. As a result, if hdparm cannot operate on a drive, it displays an error message and exits cleanly. Users of hdparm are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
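As a brief illustration of the features mentioned in the note above, the commands below show typical hdparm invocations for identifying a drive and measuring read timings with an offset. The device /dev/sda and the 500 GiB offset are placeholders, and option behavior can vary by drive and hdparm build, so consult hdparm(8) on your system before relying on them.

# Print the drive identification data (the retrieval path fixed by BZ#807056).
hdparm -I /dev/sda

# Buffered disk read timing, first from the start of the disk and then
# starting 500 GiB in, using the offset feature added in the 9.x series.
hdparm -t /dev/sda
hdparm -t --offset 500 /dev/sda

# Cached read timing, for comparison with the buffered figures.
hdparm -T /dev/sda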
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/hdparm
Chapter 8. Reference
Chapter 8. Reference 8.1. Custom Resource configuration reference A Custom Resource Definition (CRD) is a schema of configuration items for a custom OpenShift object deployed with an Operator. By deploying a corresponding Custom Resource (CR) instance, you specify values for configuration items shown in the CRD. The following sub-sections detail the configuration items that you can set in Custom Resource instances based on the main broker CRD. 8.1.1. Broker Custom Resource configuration reference A CR instance based on the main broker CRD enables you to configure brokers for deployment in an OpenShift project. The following table describes the items that you can configure in the CR instance. Important Configuration items marked with an asterisk ( * ) are required in any corresponding Custom Resource (CR) that you deploy. If you do not explicitly specify a value for a non-required item, the configuration uses the default value. Entry Sub-entry Description and usage adminUser* Administrator user name required for connecting to the broker and management console. If you do not specify a value, the value is automatically generated and stored in a secret. The default secret name has a format of <custom_resource_name> -credentials-secret . For example, my-broker-deployment-credentials-secret . Type : string Example : my-user Default value : Automatically-generated, random value adminPassword* Administrator password required for connecting to the broker and management console. If you do not specify a value, the value is automatically generated and stored in a secret. The default secret name has a format of <custom_resource_name> -credentials-secret . For example, my-broker-deployment-credentials-secret . Type : string Example : my-password Default value : Automatically-generated, random value deploymentPlan* Broker deployment configuration image* Full path of the broker container image used for each broker in the deployment. You do not need to explicitly specify a value for image in your CR. The default value of placeholder indicates that the Operator has not yet determined the appropriate image to use. To learn how the Operator chooses a broker container image to use, see Section 2.4, "How the Operator chooses container images" . Type : string Example : registry.redhat.io/amq7/amq-broker-rhel8@sha256:982ba18be1ac285722bc0ca8e85d2a42b8b844ab840b01425e79e3eeee6ee5b9 Default value : placeholder size* Number of broker Pods to create in the deployment. If you specify a value of 2 or greater, your broker deployment is clustered by default. The cluster user name and password are automatically generated and stored in the same secret as adminUser and adminPassword , by default. Type : int Example : 1 Default value : 2 requireLogin Specify whether login credentials are required to connect to the broker. Type : Boolean Example : false Default value : true persistenceEnabled Specify whether to use journal storage for each broker Pod in the deployment. If set to true , each broker Pod requires an available Persistent Volume (PV) that the Operator can claim using a Persistent Volume Claim (PVC). Type : Boolean Example : false Default value : true initImage Init Container image used to configure the broker. You do not need to explicitly specify a value for initImage in your CR, unless you want to provide a custom image. To learn how the Operator chooses a built-in Init Container image to use, see Section 2.4, "How the Operator chooses container images" . 
To learn how to specify a custom Init Container image, see Section 4.7, "Specifying a custom Init Container image" . Type : string Example : registry.redhat.io/amq7/amq-broker-init-rhel8@sha256:f37f98c809c6f29a83e3d5a3ac4494e28efe9b25d33c54f533c6a08662244622 Default value : Not specified journalType Specify whether to use asynchronous I/O (AIO) or non-blocking I/O (NIO). Type : string Example : aio Default value : nio messageMigration When a broker Pod shuts down due to an intentional scaledown of the broker deployment, specify whether to migrate messages to another broker Pod that is still running in the broker cluster. Type : Boolean Example : false Default value : true resources.limits.cpu Maximum amount of host-node CPU, in millicores, that each broker container running in a Pod in a deployment can consume. Type : string Example : "500m" Default value : Uses the same default value that your version of OpenShift Container Platform uses. Consult a cluster administrator. resources.limits.memory Maximum amount of host-node memory, in bytes, that each broker container running in a Pod in a deployment can consume. Supports byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi). Type : string Example : "1024M" Default value : Uses the same default value that your version of OpenShift Container Platform uses. Consult a cluster administrator. resources.requests.cpu Amount of host-node CPU, in millicores, that each broker container running in a Pod in a deployment explicitly requests. Type : string Example : "250m" Default value : Uses the same default value that your version of OpenShift Container Platform uses. Consult a cluster administrator. resources.requests.memory Amount of host-node memory, in bytes, that each broker container running in a Pod in a deployment explicitly requests. Supports byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi). Type : string Example : "512M" Default value : Uses the same default value that your version of OpenShift Container Platform uses. Consult a cluster administrator. storage.size Size, in bytes, of the Persistent Volume Claim (PVC) that each broker in a deployment requires for persistent storage. This property applies only when persistenceEnabled is set to true . The value that you specify must include a unit. Supports byte notation (for example, K, M, G), or the binary equivalents (Ki, Mi, Gi). Type : string Example : 4Gi Default value : 2Gi jolokiaAgentEnabled Specifies whether the Jolokia JVM Agent is enabled for the brokers in the deployment. If the value of this property is set to true , Fuse Console can discover and display runtime data for the brokers. Type : Boolean Example : true Default value : false managementRBACEnabled Specifies whether role-based access control (RBAC) is enabled for the brokers in the deployment. To use Fuse Console, you must set the value to false , because Fuse Console uses its own role-based access control. Type : Boolean Example : false Default value : true affinity Specifies scheduling constraints for pods. For information about affinity properties, see the properties in the OpenShift Container Platform documentation. tolerations Specifies the pod's tolerations. For information about tolerations properties, see the properties in the OpenShift Container Platform documentation. nodeSelector Specify a label that matches a node's labels for the pod to be scheduled on that node. 
storageClassName Specifies the name of the storage class to use for the Persistent Volume Claim (PVC). Storage classes provide a way for administrators to describe and classify the available storage. For example, a storage class might have specific quality-of-service levels, backup policies, or other administrative policies associated with it. Type : string Example : gp3 Default value : Not specified livenessProbe Configures a periodic health check on a running broker container to check that the broker is running. For information about liveness probe properties, see the properties in the OpenShift Container Platform documentation. readinessProbe Configures a periodic health check on a running broker container to check that the broker is accepting network traffic. For information about readiness probe properties, see the properties in the OpenShift Container Platform documentation. labels Assign labels to a broker pod. Type : string Example : location: "production" Default value : Not specified console Configuration of broker management console. expose Specify whether to expose the management console port for each broker in a deployment. Type : Boolean Example : true Default value : false sslEnabled Specify whether to use SSL on the management console port. Type : Boolean Example : true Default value : false sslSecret Secret where broker key store, trust store, and their corresponding passwords (all Base64-encoded) are stored. If you do not specify a value for sslSecret , the console uses a default secret name. The default secret name is in the form of <custom_resource_name> -console-secret . This property applies only when the sslEnabled property is set to true . Type : string Example : my-broker-deployment-console-secret Default value : Not specified podSecurity.serviceAccountName Specify a service account name for the broker pod. Type : string Example : activemq-artemis-controller-manager Default value : default podSecurityContext Specify the following pod-level security attributes and common container settings. * fsGroup * fsGroupChangePolicy * runAsGroup * runAsUser * runAsNonRoot * seLinuxOptions * seccompProfile * supplementalGroups * sysctls * windowsOptions For information on podSecurityContext properties, see the properties in the OpenShift Container Platform documentation. useClientAuth Specify whether the management console requires client authorization. Type : Boolean Example : true Default value : false acceptors.acceptor A single acceptor configuration instance. name* Name of acceptor. Type : string Example : my-acceptor Default value : Not applicable port Port number to use for the acceptor instance. Type : int Example : 5672 Default value : 61626 for the first acceptor that you define. The default value then increments by 10 for every subsequent acceptor that you define. protocols Messaging protocols to be enabled on the acceptor instance. Type : string Example : amqp,core Default value : all sslEnabled Specify whether SSL is enabled on the acceptor port. If set to true , look in the secret name specified in sslSecret for the credentials required by TLS/SSL. Type : Boolean Example : true Default value : false sslSecret Secret where broker key store, trust store, and their corresponding passwords (all Base64-encoded) are stored. If you do not specify a custom secret name for sslSecret , the acceptor assumes a default secret name. The default secret name has a format of <custom_resource_name> - <acceptor_name> -secret . 
You must always create this secret yourself, even when the acceptor assumes a default name. Type : string Example : my-broker-deployment-my-acceptor-secret Default value : <custom_resource_name> - <acceptor_name> -secret enabledCipherSuites Comma-separated list of cipher suites to use for TLS/SSL communication. Specify the most secure cipher suite(s) supported by your client application. If you use a comma-separated list to specify a set of cipher suites that is common to both the broker and the client, or you do not specify any cipher suites, the broker and client mutually negotiate a cipher suite to use. If you do not know which cipher suites to specify, it is recommended that you first establish a broker-client connection with your client running in debug mode, to verify the cipher suites that are common to both the broker and the client. Then, configure enabledCipherSuites on the broker. Type : string Default value : Not specified keyStoreProvider The name of the provider of the keystore that the broker uses. Type : string Example : SunJCE Default value : Not specified trustStoreProvider The name of the provider of the truststore that the broker uses. Type : string Example : SunJCE Default value : Not specified trustStoreType The type of truststore that the broker uses. Type : string Example : JCEKS Default value : JKS enabledProtocols Comma-separated list of protocols to use for TLS/SSL communication. Type : string Example : TLSv1,TLSv1.1,TLSv1.2 Default value : Not specified needClientAuth Specify whether the broker informs clients that two-way TLS is required on the acceptor. This property overrides wantClientAuth . Type : Boolean Example : true Default value : Not specified wantClientAuth Specify whether the broker informs clients that two-way TLS is requested on the acceptor, but not required. This property is overridden by needClientAuth . Type : Boolean Example : true Default value : Not specified verifyHost Specify whether to compare the Common Name (CN) of a client's certificate to its host name, to verify that they match. This option applies only when two-way TLS is used. Type : Boolean Example : true Default value : Not specified sslProvider Specify whether the SSL provider is JDK or OPENSSL. Type : string Example : OPENSSL Default value : JDK sniHost Regular expression to match against the server_name extension on incoming connections. If the names don't match, connection to the acceptor is rejected. Type : string Example : some_regular_expression Default value : Not specified expose Specify whether to expose the acceptor to clients outside OpenShift Container Platform. When you expose an acceptor to clients outside OpenShift, the Operator automatically creates a dedicated Service and Route for each broker Pod in the deployment. Type : Boolean Example : true Default value : false anycastPrefix Prefix used by a client to specify that the anycast routing type should be used. Type : string Example : jms.queue Default value : Not specified multicastPrefix Prefix used by a client to specify that the multicast routing type should be used. Type : string Example : /topic/ Default value : Not specified connectionsAllowed Number of connections allowed on the acceptor. When this limit is reached, a DEBUG message is issued to the log, and the connection is refused. The type of client in use determines what happens when the connection is refused. 
Type : integer Example : 2 Default value : 0 (unlimited connections) amqpMinLargeMessageSize Minimum message size, in bytes, required for the broker to handle an AMQP message as a large message. If the size of an AMQP message is equal or greater to this value, the broker stores the message in a large messages directory ( /opt/ <custom_resource_name> /data/large-messages , by default) on the persistent volume (PV) used by the broker for message storage. Setting the value to -1 disables large message handling for AMQP messages. Type : integer Example : 204800 Default value : 102400 (100 KB) BindToAllInterfaces If set to true, configures the broker acceptors with a 0.0.0.0 IP address instead of the internal IP address of the pod. When the broker acceptors have a 0.0.0.0 IP address, they bind to all interfaces configured for the pod and clients can direct traffic to the broker by using OpenShift Container Platform port-forwarding. Normally, you use this configuration to debug a service. For more information about port-forwarding, see Using port-forwarding to access applications in a container in the OpenShift Container Platform documentation. Note If port-forwarding is used incorrectly, it can create a security risk for your environment. Where possible, Red Hat recommends that you do not use port-forwarding in a production environment. Type : Boolean Example : true Default value : false connectors.connector A single connector configuration instance. name* Name of connector. Type : string Example : my-connector Default value : Not applicable type The type of connector to create; tcp or vm . Type : string Example : vm Default value : tcp host* Host name or IP address to connect to. Type : string Example : 192.168.0.58 Default value : Not specified port* Port number to be used for the connector instance. Type : int Example : 22222 Default value : Not specified sslEnabled Specify whether SSL is enabled on the connector port. If set to true , look in the secret name specified in sslSecret for the credentials required by TLS/SSL. Type : Boolean Example : true Default value : false sslSecret Secret where broker key store, trust store, and their corresponding passwords (all Base64-encoded) are stored. If you do not specify a custom secret name for sslSecret , the connector assumes a default secret name. The default secret name has a format of <custom_resource_name> - <connector_name> -secret . You must always create this secret yourself, even when the connector assumes a default name. Type : string Example : my-broker-deployment-my-connector-secret Default value : <custom_resource_name> - <connector_name> -secret enabledCipherSuites Comma-separated list of cipher suites to use for TLS/SSL communication. Type : string NOTE : For a connector, it is recommended that you do not specify a list of cipher suites. Default value : Not specified keyStoreProvider The name of the provider of the keystore that the broker uses. Type : string Example : SunJCE Default value : Not specified trustStoreProvider The name of the provider of the truststore that the broker uses. Type : string Example : SunJCE Default value : Not specified trustStoreType The type of truststore that the broker uses. Type : string Example : JCEKS Default value : JKS enabledProtocols Comma-separated list of protocols to use for TLS/SSL communication. Type : string Example : TLSv1,TLSv1.1,TLSv1.2 Default value : Not specified needClientAuth Specify whether the broker informs clients that two-way TLS is required on the connector. 
This property overrides wantClientAuth . Type : Boolean Example : true Default value : Not specified wantClientAuth Specify whether the broker informs clients that two-way TLS is requested on the connector, but not required. This property is overridden by needClientAuth . Type : Boolean Example : true Default value : Not specified verifyHost Specify whether to compare the Common Name (CN) of a client's certificate to its host name, to verify that they match. This option applies only when two-way TLS is used. Type : Boolean Example : true Default value : Not specified sslProvider Specify whether the SSL provider is JDK or OPENSSL . Type : string Example : OPENSSL Default value : JDK sniHost Regular expression to match against the server_name extension on outgoing connections. If the names don't match, the connector connection is rejected. Type : string Example : some_regular_expression Default value : Not specified expose Specify whether to expose the connector to clients outside OpenShift Container Platform. Type : Boolean Example : true Default value : false addressSettings.applyRule Specifies how the Operator applies the configuration that you add to the CR for each matching address or set of addresses. The values that you can specify are: merge_all For address settings specified in both the CR and the default configuration that match the same address or set of addresses: Replace any property values specified in the default configuration with those specified in the CR. Keep any property values that are specified uniquely in the CR or the default configuration. Include each of these in the final, merged configuration. For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration. merge_replace For address settings specified in both the CR and the default configuration that match the same address or set of addresses, include the settings specified in the CR in the final, merged configuration. Do not include any properties specified in the default configuration, even if these are not specified in the CR. For address settings specified in either the CR or the default configuration that uniquely match a particular address or set of addresses, include these in the final, merged configuration. replace_all Replace all address settings specified in the default configuration with those specified in the CR. The final, merged configuration corresponds exactly to that specified in the CR. Type : string Example : replace_all Default value : merge_all addressSettings.addressSetting Address settings for a matching address or set of addresses. addressFullPolicy Specify what happens when an address configured with maxSizeBytes becomes full. The available policies are: PAGE Messages sent to a full address are paged to disk. DROP Messages sent to a full address are silently dropped. FAIL Messages sent to a full address are dropped and the message producers receive an exception. BLOCK Message producers will block when they try to send any further messages. The BLOCK policy works only for AMQP, OpenWire, and Core Protocol, because those protocols support flow control. Type : string Example : DROP Default value : PAGE autoCreateAddresses Specify whether the broker automatically creates an address when a client sends a message to, or attempts to consume a message from, a queue that is bound to an address that does not exist.
Type : Boolean Example : false Default value : true autoCreateDeadLetterResources Specify whether the broker automatically creates a dead letter address and queue to receive undelivered messages. If the parameter is set to true , the broker automatically creates a dead letter address and an associated dead letter queue. The name of the automatically-created address matches the value that you specify for deadLetterAddress . Type : Boolean Example : true Default value : false autoCreateExpiryResources Specify whether the broker automatically creates an address and queue to receive expired messages. If the parameter is set to true , the broker automatically creates an expiry address and an associated expiry queue. The name of the automatically-created address matches the value that you specify for expiryAddress . Type : Boolean Example : true Default value : false autoCreateJmsQueues This property is deprecated. Use autoCreateQueues instead. autoCreateJmsTopics This property is deprecated. Use autoCreateQueues instead. autoCreateQueues Specify whether the broker automatically creates a queue when a client sends a message to, or attempts to consume a message from, a queue that does not yet exist. Type : Boolean Example : false Default value : true autoDeleteAddresses Specify whether the broker automatically deletes automatically-created addresses when the broker no longer has any queues. Type : Boolean Example : false Default value : true autoDeleteAddressDelay Time, in milliseconds, that the broker waits before automatically deleting an automatically-created address when the address has no queues. Type : integer Example : 100 Default value : 0 autoDeleteJmsQueues This property is deprecated. Use autoDeleteQueues instead. autoDeleteJmsTopics This property is deprecated. Use autoDeleteQueues instead. autoDeleteQueues Specify whether the broker automatically deletes an automatically-created queue when the queue has no consumers and no messages. Type : Boolean Example : false Default value : true autoDeleteCreatedQueues Specify whether the broker automatically deletes a manually-created queue when the queue has no consumers and no messages. Type : Boolean Example : true Default value : false autoDeleteQueuesDelay Time, in milliseconds, that the broker waits before automatically deleting an automatically-created queue when the queue has no consumers. Type : integer Example : 10 Default value : 0 autoDeleteQueuesMessageCount Maximum number of messages that can be in a queue before the broker evaluates whether the queue can be automatically deleted. Type : integer Example : 5 Default value : 0 configDeleteAddresses When the configuration file is reloaded, this parameter specifies how to handle an address (and its queues) that has been deleted from the configuration file. You can specify the following values: OFF The broker does not delete the address when the configuration file is reloaded. FORCE The broker deletes the address and its queues when the configuration file is reloaded. If there are any messages in the queues, they are removed also. Type : string Example : FORCE Default value : OFF configDeleteQueues When the configuration file is reloaded, this setting specifies how the broker handles queues that have been deleted from the configuration file. You can specify the following values: OFF The broker does not delete the queue when the configuration file is reloaded. FORCE The broker deletes the queue when the configuration file is reloaded. 
If there are any messages in the queue, they are removed also. Type : string Example : FORCE Default value : OFF deadLetterAddress The address to which the broker sends dead (that is, undelivered ) messages. Type : string Example : DLA Default value : None deadLetterQueuePrefix Prefix that the broker applies to the name of an automatically-created dead letter queue. Type : string Example : myDLQ. Default value : DLQ. deadLetterQueueSuffix Suffix that the broker applies to an automatically-created dead letter queue. Type : string Example : .DLQ Default value : None defaultAddressRoutingType Routing type used on automatically-created addresses. Type : string Example : ANYCAST Default value : MULTICAST defaultConsumersBeforeDispatch Number of consumers needed before message dispatch can begin for queues on an address. Type : integer Example : 5 Default value : 0 defaultConsumerWindowSize Default window size, in bytes, for a consumer. Type : integer Example : 300000 Default value : 1048576 (1024*1024) defaultDelayBeforeDispatch Default time, in milliseconds, that the broker waits before dispatching messages if the value specified for defaultConsumersBeforeDispatch has not been reached. Type : integer Example : 5 Default value : -1 (no delay) defaultExclusiveQueue Specifies whether all queues on an address are exclusive queues by default. Type : Boolean Example : true Default value : false defaultGroupBuckets Number of buckets to use for message grouping. Type : integer Example : 0 (message grouping disabled) Default value : -1 (no limit) defaultGroupFirstKey Key used to indicate to a consumer which message in a group is first. Type : string Example : firstMessageKey Default value : None defaultGroupRebalance Specifies whether to rebalance groups when a new consumer connects to the broker. Type : Boolean Example : true Default value : false defaultGroupRebalancePauseDispatch Specifies whether to pause message dispatch while the broker is rebalancing groups. Type : Boolean Example : true Default value : false defaultLastValueQueue Specifies whether all queues on an address are last value queues by default. Type : Boolean Example : true Default value : false defaultLastValueKey Default key to use for a last value queue. Type : string Example : stock_ticker Default value : None defaultMaxConsumers Maximum number of consumers allowed on a queue at any time. Type : integer Example : 100 Default value : -1 (no limit) defaultNonDestructive Specifies whether all queues on an address are non-destructive by default. Type : Boolean Example : true Default value : false defaultPurgeOnNoConsumers Specifies whether the broker purges the contents of a queue once there are no consumers. Type : Boolean Example : true Default value : false defaultQueueRoutingType Routing type used on automatically-created queues. The default value is MULTICAST . Type : string Example : ANYCAST Default value : MULTICAST defaultRingSize Default ring size for a matching queue that does not have a ring size explicitly set. Type : integer Example : 3 Default value : -1 (no size limit) enableMetrics Specifies whether a configured metrics plugin such as the Prometheus plugin collects metrics for a matching address or set of addresses. Type : Boolean Example : false Default value : true expiryAddress Address that receives expired messages. Type : string Example : myExpiryAddress Default value : None expiryDelay Expiration time, in milliseconds, applied to messages that are using the default expiration time. 
Type : integer Example : 100 Default value : -1 (no expiration time applied) expiryQueuePrefix Prefix that the broker applies to the name of an automatically-created expiry queue. Type : string Example : myExp. Default value : EXP. expiryQueueSuffix Suffix that the broker applies to the name of an automatically-created expiry queue. Type : string Example : .EXP Default value : None lastValueQueue Specify whether a queue uses only last values or not. Type : Boolean Example : true Default value : false managementBrowsePageSize Specify how many messages a management resource can browse. Type : integer Example : 100 Default value : 200 match * String that matches address settings to addresses configured on the broker. You can specify an exact address name or use a wildcard expression to match the address settings to a set of addresses. If you use a wildcard expression as a value for the match property, you must enclose the value in single quotation marks, for example, 'myAddresses*' . Type : string Example : 'myAddresses*' Default value : None maxDeliveryAttempts Specifies how many times the broker attempts to deliver a message before sending the message to the configured dead letter address. Type : integer Example : 20 Default value : 10 maxExpiryDelay Expiration time, in milliseconds, applied to messages that are using an expiration time greater than this value. Type : integer Example : 20 Default value : -1 (no maximum expiration time applied) maxRedeliveryDelay Maximum value, in milliseconds, between message redelivery attempts made by the broker. Type : integer Example : 100 Default value : The default value is ten times the value of redeliveryDelay , which has a default value of 0 . maxSizeBytes Maximum memory size, in bytes, for an address. Used when addressFullPolicy is set to PAGING , BLOCK , or FAIL . Also supports byte notation such as "K", "Mb", and "GB". Type : string Example : 10Mb Default value : -1 (no limit) maxSizeBytesRejectThreshold Maximum size, in bytes, that an address can reach before the broker begins to reject messages. Used when the address-full-policy is set to BLOCK . Works in combination with maxSizeBytes for the AMQP protocol only. Type : integer Example : 500 Default value : -1 (no maximum size) messageCounterHistoryDayLimit Number of days for which a broker keeps a message counter history for an address. Type : integer Example : 5 Default value : 0 minExpiryDelay Expiration time, in milliseconds, applied to messages that are using an expiration time lower than this value. Type : integer Example : 20 Default value : -1 (no minimum expiration time applied) pageMaxCacheSize Number of page files to keep in memory to optimize I/O during paging navigation. Type : integer Example : 10 Default value : 5 pageSizeBytes Paging size in bytes. Also supports byte notation such as K , Mb , and GB . Type : string Example : 20971520 Default value : 10485760 (approximately 10.5 MB) redeliveryDelay Time, in milliseconds, that the broker waits before redelivering a cancelled message. Type : integer Example : 100 Default value : 0 redeliveryDelayMultiplier Multiplying factor to apply to the value of redeliveryDelay . Type : number Example : 5 Default value : 1 redeliveryCollisionAvoidanceFactor Multiplying factor to apply to the value of redeliveryDelay to avoid collisions. Type : number Example : 1.1 Default value : 0 redistributionDelay Time, in milliseconds, that the broker waits after the last consumer is closed on a queue before redistributing any remaining messages. 
Type : integer Example : 100 Default value : -1 (not set) retroactiveMessageCount Number of messages to keep for future queues created on an address. Type : integer Example : 100 Default value : 0 sendToDlaOnNoRoute Specify whether a message will be sent to the configured dead letter address if it cannot be routed to any queues. Type : Boolean Example : true Default value : false slowConsumerCheckPeriod How often, in seconds , that the broker checks for slow consumers. Type : integer Example : 15 Default value : 5 slowConsumerPolicy Specifies what happens when a slow consumer is identified. Valid options are KILL or NOTIFY . KILL kills the consumer's connection, which impacts any client threads using that same connection. NOTIFY sends a CONSUMER_SLOW management notification to the client. Type : string Example : KILL Default value : NOTIFY slowConsumerThreshold Minimum rate of message consumption, in messages per second, before a consumer is considered slow. Type : integer Example : 100 Default value : -1 (not set) brokerProperties Configure broker properties that are not exposed in the broker's Custom Resource Definitions (CRDs) and are, otherwise, not configurable in a Custom Resource(CR). <property name>=<value> A list of property names and values to configure for the broker. One property, globalMaxSize , is currently configurable in the brokerProperties section. Setting the globalMaxSize property overrides the default amount of memory assigned to the broker. By default, a broker is assigned half of the maximum memory available to the broker's Java Virtual Machine. The default unit for the globalMaxSize property is bytes. To change the default unit, add a suffix of m (for MB) or g (for GB) to the value. Type : string Example : globalMaxSize=512m Default value : Not applicable upgrades enabled When you update the value of version to specify a new target version of AMQ Broker, specify whether to allow the Operator to automatically update the deploymentPlan.image value to a broker container image that corresponds to that version of AMQ Broker. Type : Boolean Example : true Default value : false minor Specify whether to allow the Operator to automatically update the deploymentPlan.image value when you update the value of version from one minor version of AMQ Broker to another, for example, from 7.8.0 to 7.10.7 . Type : Boolean Example : true Default value : false version Specify a target minor version of AMQ Broker for which you want the Operator to automatically update the CR to use a corresponding broker container image. For example, if you change the value of version from 7.6.0 to 7.7.0 (and upgrades.enabled and upgrades.minor are both set to true ), then the Operator updates deploymentPlan.image to a broker image of the form registry.redhat.io/amq7/amq-broker-rhel8:7.8-x . Type : string Example : 7.7.0 Default value : Current version of AMQ Broker 8.1.2. Address Custom Resource configuration reference A CR instance based on the address CRD enables you to define addresses and queues for the brokers in your deployment. The following table details the items that you can configure. Important Configuration items marked with an asterisk ( * ) are required in any corresponding Custom Resource (CR) that you deploy. If you do not explicitly specify a value for a non-required item, the configuration uses the default value. Entry Description and usage addressName* Address name to be created on broker. 
Type : string Example : address0 Default value : Not specified queueName Queue name to be created on broker. If queueName is not specified, the CR creates only the address. Type : string Example : queue0 Default value : Not specified removeFromBrokerOnDelete* Specify whether the Operator removes existing addresses for all brokers in a deployment when you remove the address CR instance for that deployment. The default value is false , which means the Operator does not delete existing addresses when you remove the CR. Type : Boolean Example : true Default value : false routingType* Routing type to be used; anycast or multicast . Type : string Example : anycast Default value : multicast 8.1.3. Security Custom Resource configuration reference A CR instance based on the security CRD enables you to define the security configuration for the brokers in your deployment, including: users and roles login modules, including propertiesLoginModule , guestLoginModule and keycloakLoginModule role based access control console access control Note Many of the options require you understand the broker security concepts described in Securing brokers The following table details the items that you can configure. Important Configuration items marked with an asterisk ( * ) are required in any corresponding Custom Resource (CR) that you deploy. If you do not explicitly specify a value for a non-required item, the configuration uses the default value. Entry Sub-entry Description and usage loginModules One or more login module configurations. A login module can be one of the following types: propertiesLoginModule - allows you define broker users directly. guestLoginModule - for a user who does not have login credentials, or whose credentials fail authentication, you can grant limited access to the broker using a guest account. keycloakLoginModule . - allows you secure brokers using Red Hat Single Sign-On. propertiesLoginModule name* Name of login module. Type : string Example : my-login Default value : Not applicable users.name* Name of user. Type : string Example : jdoe Default value : Not applicable users.password* password of user. Type : string Example : password Default value : Not applicable users.roles Names of roles. Type : string Example : viewer Default value : Not applicable guestLoginModule name* Name of guest login module. Type : string Example : guest-login Default value : Not applicable guestUser Name of guest user. Type : string Example : myguest Default value : Not applicable guestRole Name of role for guest user. Type : string Example : guest Default value : Not applicable keycloakLoginModule name Name for KeycloakLoginModule Type : string Example : sso Default value : Not applicable moduleType Type of KeycloakLoginModule (directAccess or bearerToken) Type : string Example : bearerToken Default value : Not applicable configuration The following configuration items are related to Red Hat Single Sign-On and detailed information is available from the OpenID Connect documentation. configuration.realm* Realm for KeycloakLoginModule Type : string Example : myrealm Default value : Not applicable configuration.realmPublicKey Public key for the realm Type : string Default value : Not applicable configuration.authServerUrl* URL of the keycloak authentication server Type : string Default value : Not applicable configuration.sslRequired Specify whether SSL is required Type : string Valid values are 'all', 'external' and 'none'. configuration.resource* Resource Name The client-id of the application. 
Each application has a client-id that is used to identify the application. configuration.publicClient Specify whether it is public client. Type : Boolean Default value : false Example : false configuration.credentials.key Specify the credentials key. Type : string Default value : Not applicable Type : string Default value : Not applicable configuration.credentials.value Specify the credentials value Type : string Default value : Not applicable configuration.useResourceRoleMappings Specify whether to use resource role mappings Type : Boolean Example : false configuration.enableCors Specify whether to enable Cross-Origin Resource Sharing (CORS) It will handle CORS preflight requests. It will also look into the access token to determine valid origins. Type : Boolean Default value : false configuration.corsMaxAge CORS max age If CORS is enabled, this sets the value of the Access-Control-Max-Age header. configuration.corsAllowedMethods CORS allowed methods If CORS is enabled, this sets the value of the Access-Control-Allow-Methods header. This should be a comma-separated string. configuration.corsAllowedHeaders CORS allowed headers If CORS is enabled, this sets the value of the Access-Control-Allow-Headers header. This should be a comma-separated string. configuration.corsExposedHeaders CORS exposed headers If CORS is enabled, this sets the value of the Access-Control-Expose-Headers header. This should be a comma-separated string. configuration.exposeToken Specify whether to expose access token Type : Boolean Default value : false configuration.bearerOnly Specify whether to verify bearer token Type : Boolean Default value : false configuration.autoDetectBearerOnly Specify whether to only auto-detect bearer token Type : Boolean Default value : false configuration.connectionPoolSize Size of the connection pool Type : Integer Default value : 20 configuration.allowAnyHostName Specify whether to allow any host name Type : Boolean Default value : false configuration.disableTrustManager Specify whether to disable trust manager Type : Boolean Default value : false configuration.trustStore* Path of a trust store This is REQUIRED unless ssl-required is none or disable-trust-manager is true. configuration.trustStorePassword* Truststore password This is REQUIRED if truststore is set and the truststore requires a password. configuration.clientKeyStore Path of a client keystore Type : string Default value : Not applicable configuration.clientKeyStorePassword Client keystore password Type : string Default value : Not applicable configuration.clientKeyPassword Client key password Type : string Default value : Not applicable configuration.alwaysRefreshToken Specify whether to always refresh token Type : Boolean Example : false configuration.registerNodeAtStartup Specify whether to register node at startup Type : Boolean Example : false configuration.registerNodePeriod Period for re-registering node Type : string Default value : Not applicable configuration.tokenStore Type of token store (session or cookie) Type : string Default value : Not applicable configuration.tokenCookiePath Cookie path for a cookie store Type : string Default value : Not applicable configuration.principalAttribute OpenID Connect ID Token attribute to populate the UserPrincipal name with OpenID Connect ID Token attribute to populate the UserPrincipal name with. If token attribute is null, defaults to sub. Possible values are sub, preferred_username, email, name, nickname, given_name, family_name. 
configuration.proxyUrl The proxy URL configuration.turnOffChangeSessionIdOnLogin Specify whether to change session id on a successful login Type : Boolean Example : false configuration.tokenMinimumTimeToLive Minimum time to refresh an active access token Type : Integer Default value : 0 configuration.minTimeBetweenJwksRequests Minimum interval between two requests to Keycloak to retrieve new public keys Type : Integer Default value : 10 configuration.publicKeyCacheTtl Maximum interval between two requests to Keycloak to retrieve new public keys Type : Integer Default value : 86400 configuration.ignoreOauthQueryParameter Whether to turn off processing of the access_token query parameter for bearer token processing Type : Boolean Example : false configuration.verifyTokenAudience Verify whether the token contains this client name (resource) as an audience Type : Boolean Example : false configuration.enableBasicAuth Whether to support basic authentication Type : Boolean Default value : false configuration.confidentialPort The confidential port used by the Keycloak server for secure connections over SSL/TLS Type : Integer Example : 8443 configuration.redirectRewriteRules.key The regular expression used to match the Redirect URI. Type : string Default value : Not applicable configuration.redirectRewriteRules.value The replacement String Type : string Default value : Not applicable configuration.scope The OAuth2 scope parameter for DirectAccessGrantsLoginModule Type : string Default value : Not applicable securityDomains Broker security domains brokerDomain.name Broker domain name Type : string Example : activemq Default value : Not applicable brokerDomain.loginModules One or more login modules. Each entry must be previously defined in the loginModules section above. brokerDomain.loginModules.name Name of login module Type : string Example : prop-module Default value : Not applicable brokerDomain.loginModules.flag Same as propertiesLoginModule, required , requisite , sufficient and optional are valid values. Type : string Example : sufficient Default value : Not applicable brokerDomain.loginModules.debug Debug brokerDomain.loginModules.reload Reload consoleDomain.name Broker domain name Type : string Example : activemq Default value : Not applicable consoleDomain.loginModules A single login module configuration. consoleDomain.loginModules.name Name of login module Type : string Example : prop-module Default value : Not applicable consoleDomain.loginModules.flag Same as propertiesLoginModule, required , requisite , sufficient and optional are valid values. Type : string Example : sufficient Default value : Not applicable consoleDomain.loginModules.debug Debug Type : Boolean Example : false consoleDomain.loginModules.reload Reload Type : Boolean Example : true Default : false securitySettings Additional security settings to add to broker.xml or management.xml broker.match The address match pattern for a security setting section. See AMQ Broker wildcard syntax for details about the match pattern syntax. broker.permissions.operationType The operation type of a security setting, as described in Setting permissions . Type : string Example : createAddress Default value : Not applicable broker.permissions.roles The security settings are applied to these roles, as described in Setting permissions . Type : string Example : root Default value : Not applicable securitySettings.management Options to configure management.xml . hawtioRoles The roles allowed to log into the Broker console. 
Type : string Example : root Default value : Not applicable connector.host The connector host for connecting to the management API. Type : string Example : myhost Default value : localhost connector.port The connector port for connecting to the management API. Type : integer Example : 1099 Default value : 1099 connector.jmxRealm The JMX realm of the management API. Type : string Example : activemq Default value : activemq connector.objectName The JMX object name of the management API. Type : String Example : connector:name=rmi Default : connector:name=rmi connector.authenticatorType The management API authentication type. Type : String Example : password Default : password connector.secured Whether the management API connection is secured. Type : Boolean Example : true Default value : false connector.keyStoreProvider The keystore provider for the management connector. Required if you have set connector.secured="true". The default value is JKS. connector.keyStorePath Location of the keystore. Required if you have set connector.secured="true". connector.keyStorePassword The keystore password for the management connector. Required if you have set connector.secured="true". connector.trustStoreProvider The truststore provider for the management connector Required if you have set connector.secured="true". Type : String Example : JKS Default : JKS connector.trustStorePath Location of the truststore for the management connector. Required if you have set connector.secured="true". Type : string Default value : Not applicable connector.trustStorePassword The truststore password for the management connector. Required if you have set connector.secured="true". Type : string Default value : Not applicable connector.passwordCodec The password codec for management connector The fully qualified class name of the password codec to use as described in Encrypting a password in a configuration file . authorisation.allowedList.domain The domain of allowedList Type : string Default value : Not applicable authorisation.allowedList.key The key of allowedList Type : string Default value : Not applicable authorisation.defaultAccess.method The method of defaultAccess List Type : string Default value : Not applicable authorisation.defaultAccess.roles The roles of defaultAccess List Type : string Default value : Not applicable authorisation.roleAccess.domain The domain of roleAccess List Type : string Default value : Not applicable authorisation.roleAccess.key The key of roleAccess List Type : string Default value : Not applicable authorisation.roleAccess.accessList.method The method of roleAccess List Type : string Default value : Not applicable authorisation.roleAccess.accessList.roles The roles of roleAccess List Type : string Default value : Not applicable applyToCrNames Apply this security config to the brokers defined by the named CRs in the current namespace. A value of * or empty string means applying to all brokers. Type : string Example : my-broker Default value : All brokers defined by CRs in the current namespace. 8.2. Application template parameters Configuration of the AMQ Broker on OpenShift Container Platform image is performed by specifying values of application template parameters. You can configure the following parameters: Table 8.1. Application template parameters Parameter Description AMQ_ADDRESSES Specifies the addresses available by default on the broker on its startup, in a comma-separated list. 
AMQ_ANYCAST_PREFIX Specifies the anycast prefix applied to the multiplexed protocol ports 61616 and 61617. AMQ_CLUSTERED Enables clustering. AMQ_CLUSTER_PASSWORD Specifies the password to use for clustering. The AMQ Broker application templates use the value of this parameter stored in the secret named in AMQ_CREDENTIAL_SECRET. AMQ_CLUSTER_USER Specifies the cluster user to use for clustering. The AMQ Broker application templates use the value of this parameter stored in the secret named in AMQ_CREDENTIAL_SECRET. AMQ_CREDENTIAL_SECRET Specifies the secret in which sensitive credentials such as broker user name/password, cluster user name/password, and truststore and keystore passwords are stored. AMQ_DATA_DIR Specifies the directory for the data. Used in StatefulSets. AMQ_DATA_DIR_LOGGING Specifies the directory for the data directory logging. AMQ_EXTRA_ARGS Specifies additional arguments to pass to artemis create . GLOBAL_MAX_SIZE Specifies the maximum amount of memory that message data can consume. If no value is specified, half of the system's memory is allocated. AMQ_KEYSTORE Specifies the SSL keystore file name. If no value is specified, a random password is generated but SSL will not be configured. AMQ_KEYSTORE_PASSWORD (Optional) Specifies the password used to decrypt the SSL keystore. The AMQ Broker application templates use the value of this parameter stored in the secret named in AMQ_CREDENTIAL_SECRET. AMQ_KEYSTORE_TRUSTSTORE_DIR Specifies the directory where the secrets are mounted. The default value is /etc/amq-secret-volume . AMQ_MAX_CONNECTIONS For SSL only, specifies the maximum number of connections that an acceptor will accept. AMQ_MULTICAST_PREFIX Specifies the multicast prefix applied to the multiplexed protocol ports 61616 and 61617. AMQ_NAME Specifies the name of the broker instance. The default value is amq-broker . AMQ_PASSWORD Specifies the password used for authentication to the broker. The AMQ Broker application templates use the value of this parameter stored in the secret named in AMQ_CREDENTIAL_SECRET. AMQ_PROTOCOL Specifies the messaging protocols used by the broker in a comma-separated list. Available options are amqp , mqtt , openwire , stomp , and hornetq . If none are specified, all protocols are available. Note that for integration of the image with Red Hat JBoss Enterprise Application Platform, the OpenWire protocol must be specified, while other protocols can be optionally specified as well. AMQ_QUEUES Specifies the queues available by default on the broker on its startup, in a comma-separated list. AMQ_REQUIRE_LOGIN If set to true , login is required. If not specified, or set to false , anonymous access is permitted. By default, the value of this parameter is not specified. AMQ_ROLE Specifies the name for the role created. The default value is amq . AMQ_TRUSTSTORE Specifies the SSL truststore file name. If no value is specified, a random password is generated but SSL will not be configured. AMQ_TRUSTSTORE_PASSWORD (Optional) Specifies the password used to decrypt the SSL truststore. The AMQ Broker application templates use the value of this parameter stored in the secret named in AMQ_CREDENTIAL_SECRET. AMQ_USER Specifies the user name used for authentication to the broker. The AMQ Broker application templates use the value of this parameter stored in the secret named in AMQ_CREDENTIAL_SECRET. APPLICATION_NAME Specifies the name of the application used internally within OpenShift. 
It is used in names of services, pods, and other objects within the application. IMAGE Specifies the image. Used in the persistence , persistent-ssl , and statefulset-clustered templates. IMAGE_STREAM_NAMESPACE Specifies the image stream namespace. Used in the ssl and basic templates. OPENSHIFT_DNS_PING_SERVICE_PORT Specifies the port number for the OpenShift DNS ping service. PING_SVC_NAME Specifies the name of the OpenShift DNS ping service. The default value is $APPLICATION_NAME-ping if you have specified a value for APPLICATION_NAME . Otherwise, the default value is ping . If you specify a custom value for PING_SVC_NAME , this value overrides the default value. If you want to use templates to deploy multiple broker clusters in the same OpenShift project namespace, you must ensure that PING_SVC_NAME has a unique value for each deployment. VOLUME_CAPACITY Specifies the size of the persistent storage for database volumes. Note If you use broker.xml for a custom configuration, any values specified in that file for the following parameters will override values specified for the same parameters in your application templates. AMQ_NAME AMQ_ROLE AMQ_CLUSTER_USER AMQ_CLUSTER_PASSWORD 8.3. Logging In addition to viewing the OpenShift logs, you can troubleshoot a running AMQ Broker on OpenShift Container Platform image by viewing the AMQ logs that are output to the container's console. Procedure At the command line, run the following command:
[ "oc logs -f <pass:quotes[<pod-name>]> <pass:quotes[<container-name>]>" ]
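For illustration only, the template parameters described above are typically supplied when the template is processed. In the following sketch the template name amq-broker-710-basic, the application name, and all parameter values are assumptions to replace with the template and values used in your project:

oc new-app --template=amq-broker-710-basic \
  -p APPLICATION_NAME=broker \
  -p AMQ_PROTOCOL=openwire,amqp,stomp \
  -p AMQ_QUEUES=demoQueue \
  -p AMQ_USER=amq-demo-user \
  -p AMQ_PASSWORD=password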
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/deploying_amq_broker_on_openshift/reference-broker-ocp-broker-ocp
Part V. Red Hat build of OptaPlanner starter applications
Part V. Red Hat build of OptaPlanner starter applications Red Hat build of OptaPlanner provides the following starter applications that you can deploy out-of-the-box on Red Hat OpenShift Container Platform: Employee Rostering starter application Vehicle Route Planning starter application OptaPlanner starter applications are more developed than examples and quick start guides. They focus on specific use cases and use the best technologies available to build a planning solution.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-optaplanner-starter-applications_optaplanner-quickstarts
3.9. Virtual LAN (VLAN)
3.9. Virtual LAN (VLAN) A VLAN (Virtual LAN) is an attribute that can be applied to network packets. Network packets can be "tagged" into a numbered VLAN. A VLAN is a security feature used to completely isolate network traffic at the switch level. VLANs are completely separate and mutually exclusive. The Red Hat Virtualization Manager is VLAN aware and able to tag and redirect VLAN traffic; however, VLAN implementation requires a switch that supports VLANs. At the switch level, ports are assigned a VLAN designation. A switch applies a VLAN tag to traffic originating from a particular port, marking the traffic as part of a VLAN, and ensures that responses carry the same VLAN tag. A VLAN can extend across multiple switches. VLAN-tagged network traffic on a switch is completely undetectable except by machines connected to a port designated with the correct VLAN. A given port can be tagged into multiple VLANs, which allows traffic from multiple VLANs to be sent to a single port, to be deciphered using software on the machine that receives the traffic.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/virtual_lan_vlan
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_advanced_cryostat_configurations/making-open-source-more-inclusive
Chapter 22. Internationalization
Chapter 22. Internationalization 22.1. RHEL 8 International Languages Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of languages based on your requirements. East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese. European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian. The following table lists the fonts and input methods provided for various major languages. Language Default Font (Font Package) Input Methods English dejavu-sans-fonts French dejavu-sans-fonts German dejavu-sans-fonts Italian dejavu-sans-fonts Russian dejavu-sans-fonts Spanish dejavu-sans-fonts Portuguese dejavu-sans-fonts Simplified Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libpinyin, libpinyin Traditional Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libzhuyin, libzhuyin Japanese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-kkc, libkkc Korean google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-hangul, libhangul 22.2. Notable changes to internationalization in RHEL 8 RHEL 8 introduces the following changes to internationalization compared to RHEL 7: Support for the Unicode 11 computing industry standard has been added. Internationalization is distributed in multiple packages, which allows for smaller footprint installations. For more information, see Using langpacks . The glibc package updates for multiple locales are now synchronized with the Common Locale Data Repository (CLDR).
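As a minimal illustration of the langpack-based packaging, a Japanese language pack can be installed as shown below. The package names glibc-langpack-ja and langpacks-ja are assumptions that depend on the language you need:

yum install glibc-langpack-ja langpacks-ja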
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/internationalization_internationalization
3.2. cpu
3.2. cpu The cpu subsystem schedules CPU access to cgroups. Access to CPU resources can be scheduled using two schedulers: Completely Fair Scheduler (CFS) - a proportional share scheduler which divides the CPU time (CPU bandwidth) proportionately between groups of tasks (cgroups) depending on the priority/weight of the task or shares assigned to cgroups. For more information about resource limiting using CFS, refer to Section 3.2.1, "CFS Tunable Parameters" . Real-Time scheduler (RT) - a task scheduler that provides a way to specify the amount of CPU time that real-time tasks can use. For more information about resource limiting of real-time tasks, refer to Section 3.2.2, "RT Tunable Parameters" . 3.2.1. CFS Tunable Parameters In CFS, a cgroup can get more than its share of CPU if there are enough idle CPU cycles available in the system, due to the work-conserving nature of the scheduler. This is usually the case for cgroups that consume CPU time based on relative shares. Ceiling enforcement can be used for cases when a hard limit on the amount of CPU that a cgroup can utilize is required (that is, tasks cannot use more than a set amount of CPU time). The following options can be used to configure ceiling enforcement or relative sharing of CPU: Ceiling Enforcement Tunable Parameters cpu.cfs_period_us specifies a period of time in microseconds (µs, represented here as " us ") for how regularly a cgroup's access to CPU resources should be reallocated. If tasks in a cgroup should be able to access a single CPU for 0.2 seconds out of every 1 second, set cpu.cfs_quota_us to 200000 and cpu.cfs_period_us to 1000000 . The upper limit of the cpu.cfs_period_us parameter is 1 second and the lower limit is 1000 microseconds. cpu.cfs_quota_us specifies the total amount of time in microseconds (µs, represented here as " us ") for which all tasks in a cgroup can run during one period (as defined by cpu.cfs_period_us ). As soon as tasks in a cgroup use up all the time specified by the quota, they are throttled for the remainder of the time specified by the period and not allowed to run until the next period. If tasks in a cgroup should be able to access a single CPU for 0.2 seconds out of every 1 second, set cpu.cfs_quota_us to 200000 and cpu.cfs_period_us to 1000000 . Note that the quota and period parameters operate on a CPU basis. To allow a process to fully utilize two CPUs, for example, set cpu.cfs_quota_us to 200000 and cpu.cfs_period_us to 100000 . Setting the value in cpu.cfs_quota_us to -1 indicates that the cgroup does not adhere to any CPU time restrictions. This is also the default value for every cgroup (except the root cgroup). cpu.stat reports CPU time statistics using the following values: nr_periods - number of period intervals (as specified in cpu.cfs_period_us ) that have elapsed. nr_throttled - number of times tasks in a cgroup have been throttled (that is, not allowed to run because they have exhausted all of the available time as specified by their quota). throttled_time - the total time duration (in nanoseconds) for which tasks in a cgroup have been throttled. Relative Shares Tunable Parameters cpu.shares contains an integer value that specifies a relative share of CPU time available to the tasks in a cgroup. For example, tasks in two cgroups that have cpu.shares set to 100 will receive equal CPU time, but tasks in a cgroup that has cpu.shares set to 200 receive twice the CPU time of tasks in a cgroup where cpu.shares is set to 100 .
The value specified in the cpu.shares file must be 2 or higher. Note that shares of CPU time are distributed per all CPU cores on multi-core systems. Even if a cgroup is limited to less than 100% of CPU on a multi-core system, it may use 100% of each individual CPU core. Consider the following example: if cgroup A is configured to use 25% and cgroup B 75% of the CPU, starting four CPU-intensive processes (one in A and three in B ) on a system with four cores results in the following division of CPU shares: Table 3.1. CPU share division PID cgroup CPU CPU share 100 A 0 100% of CPU0 101 B 1 100% of CPU1 102 B 2 100% of CPU2 103 B 3 100% of CPU3 Using relative shares to specify CPU access has two implications on resource management that should be considered: Because the CFS does not demand equal usage of CPU, it is hard to predict how much CPU time a cgroup will be allowed to utilize. When tasks in one cgroup are idle and are not using any CPU time, the leftover time is collected in a global pool of unused CPU cycles. Other cgroups are allowed to borrow CPU cycles from this pool. The actual amount of CPU time that is available to a cgroup can vary depending on the number of cgroups that exist on the system. If a cgroup has a relative share of 1000 and two other cgroups have a relative share of 500 , the first cgroup receives 50% of all CPU time in cases when processes in all cgroups attempt to use 100% of the CPU. However, if another cgroup is added with a relative share of 1000 , the first cgroup is only allowed 33% of the CPU (the rest of the cgroups receive 16.5%, 16.5%, and 33% of CPU).
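A minimal sketch of these parameters in use follows, assuming the cpu controller is mounted at /cgroup/cpu and a cgroup named group1 already exists (both are assumptions that depend on your cgconfig setup). It caps group1 at 20% of a single CPU per period and gives it a relative share of 200, then checks the throttling statistics:

echo 100000 > /cgroup/cpu/group1/cpu.cfs_period_us
echo 20000 > /cgroup/cpu/group1/cpu.cfs_quota_us
echo 200 > /cgroup/cpu/group1/cpu.shares
cat /cgroup/cpu/group1/cpu.stat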
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpu
Chapter 6. HardwareData [metal3.io/v1alpha1]
Chapter 6. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HardwareDataSpec defines the desired state of HardwareData. 6.1.1. .spec Description HardwareDataSpec defines the desired state of HardwareData. Type object Property Type Description hardware object The hardware discovered on the host during its inspection. 6.1.2. .spec.hardware Description The hardware discovered on the host during its inspection. Type object Property Type Description cpu object Details of the CPU(s) in the system. firmware object System firmware information. hostname string nics array List of network interfaces for the host. nics[] object NIC describes one network interface on the host. ramMebibytes integer The host's amount of memory in Mebibytes. storage array List of storage (disk, SSD, etc.) available to the host. storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object System vendor information. 6.1.3. .spec.hardware.cpu Description Details of the CPU(s) in the system. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 6.1.4. .spec.hardware.firmware Description System firmware information. Type object Property Type Description bios object The BIOS for this firmware 6.1.5. .spec.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 6.1.6. .spec.hardware.nics Description List of network interfaces for the host. Type array 6.1.7. .spec.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN. 6.1.8. .spec.hardware.nics[].vlans Description The VLANs available Type array 6.1.9. .spec.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN. 
Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 6.1.10. .spec.hardware.storage Description List of storage (disk, SSD, etc.) available to the host. Type array 6.1.11. .spec.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description alternateNames array (string) A list of alternate Linux device names of the disk, e.g. "/dev/sda". Note that this list is not exhaustive, and names may not be stable across reboots. hctl string The SCSI location of the device model string Hardware model name string A Linux device name of the disk, e.g. "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". This will be a name that is stable across reboots if one is available. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 6.1.12. .spec.hardware.systemVendor Description System vendor information. Type object Property Type Description manufacturer string productName string serialNumber string 6.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hardwaredata GET : list objects of kind HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata DELETE : delete collection of HardwareData GET : list objects of kind HardwareData POST : create a HardwareData /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} DELETE : delete a HardwareData GET : read the specified HardwareData PATCH : partially update the specified HardwareData PUT : replace the specified HardwareData 6.2.1. /apis/metal3.io/v1alpha1/hardwaredata HTTP method GET Description list objects of kind HardwareData Table 6.1. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty 6.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata HTTP method DELETE Description delete collection of HardwareData Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HardwareData Table 6.3. HTTP responses HTTP code Reponse body 200 - OK HardwareDataList schema 401 - Unauthorized Empty HTTP method POST Description create a HardwareData Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body HardwareData schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 202 - Accepted HardwareData schema 401 - Unauthorized Empty 6.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hardwaredata/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the HardwareData HTTP method DELETE Description delete a HardwareData Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HardwareData Table 6.10. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HardwareData Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HardwareData Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body HardwareData schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK HardwareData schema 201 - Created HardwareData schema 401 - Unauthorized Empty
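As an illustrative sketch of reading these objects with the CLI against the endpoints listed above (the namespace openshift-machine-api and the object name are assumptions for a typical bare-metal deployment):

oc get hardwaredata -n openshift-machine-api
oc get hardwaredata <host_name> -n openshift-machine-api -o jsonpath='{.spec.hardware.cpu}'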
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/hardwaredata-metal3-io-v1alpha1
Hot Rod Java Client Guide
Hot Rod Java Client Guide Red Hat Data Grid 8.5 Configure and use Hot Rod Java clients Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/hot_rod_java_client_guide/index
Chapter 3. Parameters
Chapter 3. Parameters Each Heat template in the director's template collection contains a parameters section. This section defines all parameters specific to a particular overcloud service. This includes the following: overcloud.j2.yaml - Default base parameters roles_data.yaml - Default parameters for composable roles deployment/*.yaml - Default parameters for specific services You can modify the values for these parameters using the following method: Create an environment file for your custom parameters. Include your custom parameters in the parameter_defaults section of the environment file. Include the environment file with the openstack overcloud deploy command. The following sections contain examples to demonstrate how to configure specific parameters for services in the deployment directory. 3.1. Example 1: Configuring the time zone The Heat template for setting the timezone ( puppet/services/time/timezone.yaml ) contains a TimeZone parameter. If you leave the TimeZone parameter blank, the overcloud sets the time to UTC as a default. To obtain a list of timezones, run the timedatectl list-timezones command. The following example command retrieves the timezones for Asia: After you identify your timezone, set the TimeZone parameter in an environment file. The following example environment file sets the value of TimeZone to Asia/Tokyo : 3.2. Example 2: Enabling Networking Distributed Virtual Routing (DVR) DVR is enabled by default in new ML2/OVN deployments and disabled by default in new ML2/OVS deployments. The Heat template for the OpenStack Networking (neutron) API ( deployment/neutron/neutron-api-container-puppet.yaml ) contains a parameter to enable and disable Distributed Virtual Routing (DVR). To disable DVR, use the following in an environment file: 3.3. Example 3: Configuring RabbitMQ File Descriptor Limit For certain configurations, you might need to increase the file descriptor limit for the RabbitMQ server. The deployment/rabbitmq/rabbitmq-container-puppet.yaml Heat template allows you to set the RabbitFDLimit parameter to the limit you require. Add the following to an environment file: 3.4. Example 4: Enabling and Disabling Parameters You might need to initially set a parameter during a deployment, then disable the parameter for a future deployment operation, such as updates or scaling operations. For example, to include a custom RPM during the overcloud creation, include the following: To disable this parameter from a future deployment, it is not enough to remove the parameter. Instead, you set the parameter to an empty value: This ensures the parameter is no longer set for subsequent deployment operations. 3.5. Example 5: Role-based parameters Use the [ROLE]Parameters parameters, replacing [ROLE] with a composable role, to set parameters for a specific role. For example, director configures logrotate on both Controller and Compute nodes. To set different logrotate parameters for Controller and Compute nodes, create an environment file that contains both the 'ControllerParameters' and 'ComputeParameters' parameters and set the logrotate parameters for each specific role: 3.6. Identifying Parameters to Modify Red Hat OpenStack Platform director provides many parameters for configuration. In some cases, you might experience difficulty identifying a certain option to configure and the corresponding director parameter.
If there is an option you want to configure through the director, use the following workflow to identify and map the option to a specific overcloud parameter: Identify the option you aim to configure. Make a note of the service that uses the option. Check the corresponding Puppet module for this option. The Puppet modules for Red Hat OpenStack Platform are located under /etc/puppet/modules on the director node. Each module corresponds to a particular service. For example, the keystone module corresponds to OpenStack Identity (keystone). If the Puppet module contains a variable that controls the chosen option, move to the next step. If the Puppet module does not contain a variable that controls the chosen option, no hieradata exists for this option. If possible, you can set the option manually after the overcloud completes deployment. Check the director's core Heat template collection for the Puppet variable in the form of hieradata. The templates in deployment/* usually correspond to the Puppet modules of the same services. For example, the deployment/keystone/keystone-container-puppet.yaml template provides hieradata to the keystone module. If the Heat template sets hieradata for the Puppet variable, the template should also disclose the director-based parameter to modify. If the Heat template does not set hieradata for the Puppet variable, use the configuration hooks to pass the hieradata using an environment file. See Section 4.5, "Puppet: Customizing Hieradata for Roles" for more information on customizing hieradata. Workflow Example To change the notification format for OpenStack Identity (keystone), use the workflow and complete the following steps: Identify the OpenStack parameter to configure ( notification_format ). Search the keystone Puppet module for the notification_format setting. For example: In this case, the keystone module manages this option using the keystone::notification_format variable. Search the keystone service template for this variable. For example: The output shows the director using the KeystoneNotificationFormat parameter to set the keystone::notification_format hieradata. The following table shows the eventual mapping: Director Parameter Puppet Hieradata OpenStack Identity (keystone) option KeystoneNotificationFormat keystone::notification_format notification_format You set the KeystoneNotificationFormat in an overcloud's environment file, which in turn sets the notification_format option in the keystone.conf file during the overcloud's configuration.
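As an illustrative sketch of the final step, the parameter identified above can be set in a custom environment file and included in the deployment command. The file name and the cadf value are assumptions to adapt to your environment:

cat > keystone-notification.yaml <<'EOF'
parameter_defaults:
  KeystoneNotificationFormat: cadf
EOF
openstack overcloud deploy --templates -e keystone-notification.yaml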
[ "sudo timedatectl list-timezones|grep \"Asia\"", "parameter_defaults: TimeZone: 'Asia/Tokyo'", "parameter_defaults: NeutronEnableDVR: false", "parameter_defaults: RabbitFDLimit: 65536", "parameter_defaults: DeployArtifactURLs: [\"http://www.example.com/myfile.rpm\"]", "parameter_defaults: DeployArtifactURLs: []", "parameter_defaults: ControllerParameters: LogrotateMaxsize: 10M LogrotatePurgeAfterDays: 30 ComputeParameters: LogrotateMaxsize: 20M LogrotatePurgeAfterDays: 15", "grep notification_format /etc/puppet/modules/keystone/manifests/*", "grep \"keystone::notification_format\" /usr/share/openstack-tripleo-heat-templates/deployment/keystone/keystone-container-puppet.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/sect-configuring_base_parameters
8.3. Configure the DSN for Linux Installation
8.3. Configure the DSN for Linux Installation Edit the /opt/redhat/jboss-dv/v6/psqlodbc/etc/odbc.ini file and update it with the correct username, password, and database. The database name is the VDB name. ODBC is enabled in JBoss Data Virtualization on port 35432 by default.
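A minimal sketch of the resulting DSN entry follows, written here as a shell snippet. The DSN name, driver path, key names, credentials, and VDB name are assumptions to replace with your own values:

cat >> /opt/redhat/jboss-dv/v6/psqlodbc/etc/odbc.ini <<'EOF'
[MyVDB]
Driver = /opt/redhat/jboss-dv/v6/psqlodbc/lib/psqlodbcw.so
Servername = localhost
Port = 35432
Database = MyVDB
Username = teiid-user
Password = password
EOF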
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/configure_the_dsn_for_linux_installation1
Chapter 13. Maintaining Satellite Server
Chapter 13. Maintaining Satellite Server This chapter provides information on how to maintain Satellite Server, including information on how to recover Pulp from a full disk, how to work with audit records, and how to clean unused tasks. 13.1. Deleting audit records manually You can use the foreman-rake audits:expire command to remove audit records at any time. Procedure Delete the audit records using the foreman-rake audits:expire command: This command deletes all audit records older than Number_Of_Days . 13.2. Deleting audit records automatically You can automatically delete audit records using the Saved audits interval setting. This setting is empty by default, meaning Satellite does not automatically delete the audit records. Procedure In the Satellite web UI, navigate to Administer > Settings . On the General tab, find the Saved audits interval setting. Set the value of the setting to the number of days after which you want Satellite to delete the audit records. 13.3. Anonymizing audit records You can use the foreman-rake audits:anonymize command to remove any user account or IP information while maintaining the audit records in the database. You can also use a cron job to schedule anonymizing the audit records at the set interval that you want. By default, using the foreman-rake audits:anonymize command anonymizes audit records that are older than 90 days. You can specify the number of days to keep the audit records by adding the days option and specifying the number of days. For example, if you want to anonymize audit records that are older than seven days, enter the following command: 13.4. Deleting report records Report records are created automatically in Satellite. You can use the foreman-rake reports:expire command to remove reports at any time. You can also use a cron job to schedule report record deletions at the set interval that you want. By default, using the foreman-rake reports:expire command removes report records that are older than 90 days. You can specify the number of days to keep the report records by adding the days option and specifying the number of days. For example, if you want to delete report records that are older than seven days, enter the following command: 13.5. Configuring the cleaning unused tasks feature Satellite performs regular cleaning to reduce disk space in the database and limit the rate of disk growth. As a result, Satellite backup completes faster and overall performance is higher. By default, Satellite executes a cron job that cleans tasks every day at 19:45. Satellite removes the following tasks during the cleaning: Tasks that have run successfully and are older than thirty days All tasks that are older than a year You can configure the cleaning unused tasks feature using these options: To configure the time at which Satellite runs the cron job, set the --foreman-plugin-tasks-cron-line parameter to the time you want in cron format. For example, to schedule the cron job to run every day at 15:00, enter the following command: To configure the period after which Satellite deletes the tasks, edit the :rules: section in the /etc/foreman/plugins/foreman-tasks.yaml file. To disable regular task cleanup on Satellite, enter the following command: To reenable regular task cleanup on Satellite, enter the following command: 13.6. Deleting task records Task records are created automatically in Satellite. You can use the foreman-rake foreman_tasks:cleanup command to remove tasks at any time.
You can also use a cron job to schedule Task record deletions at the set interval that you want. For example, if you want to delete task records from successful repository synchronizations, enter the following command: 13.7. Deleting a task by ID You can delete tasks by ID, for example if you have submitted confidential data by mistake. Procedure Connect to your Satellite Server using SSH: Optional: View the task: Delete the task: Optional: Ensure the task has been removed from Satellite Server: Note that because the task is deleted, this command returns a non-zero exit code. 13.8. Recovering from a full disk The following procedure describes how to resolve the situation when a logical volume (LV) with the Pulp database on it has no free space. Procedure Let running Pulp tasks finish but do not trigger any new ones as they can fail due to the full disk. Ensure that the LV with the /var/lib/pulp directory on it has sufficient free space. Here are some ways to achieve that: Remove orphaned content: This is run weekly so it will not free much space. Change the download policy from Immediate to On Demand for as many repositories as possible and remove already downloaded packages. See the Red Hat Knowledgebase solution How to change syncing policy for Repositories on Satellite from "Immediate" to "On-Demand" on the Red Hat Customer Portal for instructions. Grow the file system on the LV with the /var/lib/pulp directory on it. For more information, see Growing a logical volume and file system in Red Hat Enterprise Linux 8 Configuring and managing logical volumes . Note If you use an untypical file system (other than for example ext3, ext4, or xfs), you might need to unmount the file system so that it is not in use. In that case, complete the following steps: Stop Satellite services: Grow the file system on the LV. Start Satellite services: If some Pulp tasks failed due to the full disk, run them again. 13.9. Reclaiming PostgreSQL space The PostgreSQL database can use a large amount of disk space especially in heavily loaded deployments. Use this procedure to reclaim some of this disk space on Satellite. Procedure Stop all services, except for the postgresql service: Switch to the postgres user and reclaim space on the database: Start the other services when the vacuum completes: 13.10. Reclaiming space from on demand repositories If you set the download policy to on demand, Satellite downloads packages only when the clients request them. You can clean up these packages to reclaim space. Note Space reclamation requires an existing repository. For deleted repositories, wait for the scheduled orphan cleanup or remove orphaned content manually: For a single repository In the Satellite web UI, navigate to Content > Products . Select a product. On the Repositories tab, click the repository name. From the Select Actions list, select Reclaim Space . For multiple repositories In the Satellite web UI, navigate to Content > Products . Select the product name. On the Repositories tab, select the checkbox of the repositories. Click Reclaim Space at the top right corner. For Capsules In the Satellite web UI, navigate to Infrastructure > Capsules . Select the Capsule Server. Click Reclaim space .
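For example, a minimal sketch of scheduling the anonymization with cron is shown below. The file path, schedule, and foreman-rake location are assumptions; it anonymizes audit records older than seven days every night at 03:00:

cat > /etc/cron.d/audits-anonymize <<'EOF'
0 3 * * * root /usr/sbin/foreman-rake audits:anonymize days=7
EOF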
[ "foreman-rake audits:expire days= Number_Of_Days", "foreman-rake audits:anonymize days=7", "foreman-rake reports:expire days=7", "satellite-installer --foreman-plugin-tasks-cron-line \"00 15 * * *\"", "satellite-installer --foreman-plugin-tasks-automatic-cleanup false", "satellite-installer --foreman-plugin-tasks-automatic-cleanup true", "foreman-rake foreman_tasks:cleanup TASK_SEARCH='label = Actions::Katello::Repository::Sync' STATES='stopped'", "ssh [email protected]", "hammer task info --id My_Task_ID", "foreman-rake foreman_tasks:cleanup TASK_SEARCH=\"id= My_Task_ID \"", "hammer task info --id My_Task_ID", "foreman-rake katello:delete_orphaned_content RAILS_ENV=production", "satellite-maintain service stop", "satellite-maintain service start", "satellite-maintain service stop --exclude postgresql", "su - postgres -c 'vacuumdb --full --all'", "satellite-maintain service start", "foreman-rake katello:delete_orphaned_content" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/maintaining-satellite-server_admin
Chapter 9. Quotas
Chapter 9. Quotas 9.1. Resource quotas per project A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project. This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them. 9.1.1. Resources managed by quotas The following describes the set of compute resources and object types that can be managed by a quota. Note A pod is in a terminal state if status.phase in (Failed, Succeeded) is true. Table 9.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. Table 9.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. Table 9.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of ReplicationControllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. services.loadbalancers The total number of services of type LoadBalancer that can exist in the project. 
services.nodeports The total number of services of type NodePort that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of imagestreams that can exist in the project. 9.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu 9.1.3. Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system. 9.1.4. Requests versus limits When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 9.1.5. Sample resource quota definitions core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 services.loadbalancers: "2" 6 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. 6 The total number of services of type LoadBalancer that can exist in the project. 
openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 limits.cpu: "2" 4 limits.memory: 2Gi 5 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 5 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 scopes: - NotTerminating 4 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods fall under NotTerminating unless the RestartNever policy is applied. compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 scopes: - Terminating 4 1 The total number of pods in a terminating state. 2 Across all pods in a terminating state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a terminating state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota charges for build or deployer pods, but not long running pods like a web server or database. storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 
5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 8 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 9 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. 9.1.6. Creating a quota You can create a quota to constrain resource usage in a given project. Procedure Define the quota in a file. Use the file to create the quota and apply it to a project: USD oc create -f <file> [-n <project_name>] For example: USD oc create -f core-object-counts.yaml -n demoproject 9.1.6.1. Creating object count quotas You can create an object count quota for all standard namespaced resource types on OpenShift Container Platform, such as BuildConfig and DeploymentConfig objects. An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. The quota can only be created if there are enough spare resources within the project. Procedure To configure an object count quota for a resource: Run the following command: USD oc create quota <name> \ --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1 1 The <resource> variable is the name of the resource, and <group> is the API group, if applicable. Use the oc api-resources command for a list of resources and their associated API groups. For example: USD oc create quota test \ --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 Example output resourcequota "test" created This example limits the listed resources to the hard limit in each project in the cluster. Verify that the quota was created: USD oc describe quota test Example output Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 9.1.6.2. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. is allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure Determine how many GPUs are available on a node in your cluster. For example: # oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0 In this example, 2 GPUs are available. Create a ResourceQuota object to set a quota in the namespace nvidia . 
In this example, the quota is 1 : Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota: # oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml : apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Create the pod: # oc create -f gpu-pod.yaml Verify that the pod is running: # oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: # oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 9.1.7. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details. Procedure Get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject Example output NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10 Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Example output Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 9.1.8. Configuring explicit resource quotas Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Add a resource quota definition to a project request template: If a project request template does not exist in a cluster: Create a bootstrap project template and output it to a file called template.yaml : USD oc adm create-bootstrap-project-template -o yaml > template.yaml Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. 
The definition must be added before the parameters: section in the template: - apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project. 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot create claims. Create a project request template from the modified template.yaml file in the openshift-config namespace: USD oc create -f template.yaml -n openshift-config Note To include the configuration as a kubectl.kubernetes.io/last-applied-configuration annotation, add the --save-config option to the oc create command. By default, the template is called project-request . If a project request template already exists within a cluster: Note If you declaratively or imperatively manage objects within your cluster by using configuration files, edit the existing project request template through those files instead. List templates in the openshift-config namespace: USD oc get templates -n openshift-config Edit an existing project request template: USD oc edit template <project_request_template> -n openshift-config Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template. If you created a project request template, reference it in the cluster's project configuration resource: Access the project configuration resource for editing: By using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . By using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name project-request : apiVersion: config.openshift.io/v1 kind: Project metadata: # ... 
spec: projectRequestTemplate: name: project-request Verify that the resource quota is applied when projects are created: Create a project: USD oc new-project <project_name> List the project's resource quotas: USD oc get resourcequotas Describe the resource quota in detail: USD oc describe resourcequotas <resource_quota_name> 9.2. Resource quotas across multiple projects A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated and that aggregate is used to limit resources across all the selected projects. This guide describes how cluster administrators can set and manage resource quotas across multiple projects. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 9.2.1. Selecting multiple projects during quota creation When creating quotas, you can select multiple projects based on annotation selection, label selection, or both. Procedure To select projects based on annotations, run the following command: USD oc create clusterquota for-user \ --project-annotation-selector openshift.io/requester=<user_name> \ --hard pods=10 \ --hard secrets=20 This creates the following ClusterResourceQuota object: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: "10" secrets: "20" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" total: 5 hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" 1 The ResourceQuotaSpec object that will be enforced over the selected projects. 2 A simple key-value selector for annotations. 3 A label selector that can be used to select projects. 4 A per-namespace map that describes current quota usage in each selected project. 5 The aggregate usage across all selected projects. This multi-project quota document controls all projects requested by <user_name> using the default project request endpoint. You are limited to 10 pods and 20 secrets. Similarly, to select projects based on labels, run this command: USD oc create clusterresourcequota for-name \ 1 --project-label-selector=name=frontend \ 2 --hard=pods=10 --hard=secrets=20 1 Both clusterresourcequota and clusterquota are aliases of the same command. for-name is the name of the ClusterResourceQuota object. 2 To select projects by label, provide a key-value pair by using the format --project-label-selector=key=value . This creates the following ClusterResourceQuota object definition: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: "10" secrets: "20" selector: annotations: null labels: matchLabels: name: frontend 9.2.2. 
Viewing applicable cluster resource quotas A project administrator cannot create or modify the multi-project quota that limits their project, but they can view the multi-project quota documents that are applied to their project. The project administrator can do this through the AppliedClusterResourceQuota resource. Procedure To view quotas applied to a project, run: USD oc describe AppliedClusterResourceQuota Example output Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20 9.2.3. Selection granularity Because of the locking considerations when claiming quota allocations, the number of active projects selected by a multi-project quota is an important consideration. Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects.
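Section 9.2.1 notes that you can select projects by annotations, labels, or both, but the command examples above show only one selector type at a time. The following is a minimal sketch of a quota that combines both selector types; the name for-user-frontend and the combined selection are illustrative assumptions, not values taken from the examples above. Only projects that were requested by <user_name> and that also carry the name=frontend label count against this quota:
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: for-user-frontend  # hypothetical name for this sketch
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations:  # matches projects created through the default project request endpoint by this user
      openshift.io/requester: <user_name>
    labels:  # and that also carry the name=frontend label
      matchLabels:
        name: frontend
Because both the annotations and labels selectors are set, a project must satisfy both to be selected.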
[ "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9", "oc create -f <file> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4", "resourcequota \"test\" created", "oc describe quota test", "Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc create -f gpu-pod.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: 
requests.nvidia.com/gpu=1", "oc get quota -n demoproject", "NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10", "oc describe quota core-object-counts -n demoproject", "Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f template.yaml -n openshift-config", "oc get templates -n openshift-config", "oc edit template <project_request_template> -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request", "oc new-project <project_name>", "oc get resourcequotas", "oc describe resourcequotas <resource_quota_name>", "oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"", "oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend", "oc describe AppliedClusterResourceQuota", "Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/quotas
Chapter 25. Load balancing with MetalLB
Chapter 25. Load balancing with MetalLB 25.1. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples in this documentation assume that the namespace is metallb-system . For more information about how to install the MetalLB Operator, see About MetalLB and the MetalLB Operator . 25.1.1. About the IPAddressPool custom resource The fields for the IPAddressPool custom resource are described in the following tables. Table 25.1. MetalLB IPAddressPool pool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.io/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement. spec.addresses string Specifies a list of IP addresses for the MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want to explicitly request an IP address from this pool with the metallb.io/address-pool annotation. The default value is true . spec.avoidBuggyIPs boolean Optional: When enabled, this ensures that IP addresses ending in .0 and .255 are not allocated from the pool. The default value is false . Some older consumer network equipment mistakenly blocks IP addresses ending in .0 and .255. You can assign IP addresses from an IPAddressPool to services and namespaces by configuring the spec.serviceAllocation specification. Table 25.2. MetalLB IPAddressPool custom resource spec.serviceAllocation subfields Field Type Description priority int Optional: Defines the priority between IP address pools when more than one IP address pool matches a service or namespace. A lower number indicates a higher priority. namespaces array (string) Optional: Specifies a list of namespaces that you can assign to IP addresses in an IP address pool. namespaceSelectors array (LabelSelector) Optional: Specifies namespace labels that you can assign to IP addresses from an IP address pool by using label selectors in a list format. serviceSelectors array (LabelSelector) Optional: Specifies service labels that you can assign to IP addresses from an address pool by using label selectors in a list format. 25.1.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 25.1.3. Configure MetalLB address pool for VLAN As a cluster administrator, you can add address pools to your cluster to control the IP addresses on a created VLAN that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Configure a separate VLAN. Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool-vlan.yaml , that is similar to the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. 2 This IP range must match the subnet assigned to the VLAN on your network. To support layer 2 (L2) mode, the IP address range must be within the same subnet as the cluster nodes. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool-vlan.yaml To ensure that this configuration applies to the VLAN, you need to set gatewayConfig.ipForwarding in the spec to Global . Run the following command to edit the network configuration custom resource (CR): USD oc edit network.config.openshift/cluster Update the spec.defaultNetwork.ovnKubernetesConfig section to include gatewayConfig.ipForwarding set to Global . It should look something like this: Example ... spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global ... 25.1.4. Example address pool configurations 25.1.4.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 25.1.4.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool.
apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 25.1.4.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just like several IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 25.1.4.4. Example: Assign IP address pools to services or namespaces You can assign IP addresses from an IPAddressPool to services and namespaces that you specify. If you assign a service or namespace to more than one IP address pool, MetalLB uses an available IP address from the higher-priority IP address pool. If no IP addresses are available from the assigned IP address pools with a high priority, MetalLB uses available IP addresses from an IP address pool with lower priority or no priority. Note You can use the matchLabels label selector, the matchExpressions label selector, or both, for the namespaceSelectors and serviceSelectors specifications. This example demonstrates one label selector for each specification. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1 1 Assign a priority to the address pool. A lower number indicates a higher priority. 2 Assign one or more namespaces to the IP address pool in a list format. 3 Assign one or more namespace labels to the IP address pool by using label selectors in a list format. 4 Assign one or more service labels to the IP address pool by using label selectors in a list format. 25.1.5. steps Configuring MetalLB with an L2 advertisement and label Configuring MetalLB BGP peers Configuring services to use MetalLB 25.2. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 25.2.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 25.3. BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask. 
To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. spec.peers string Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPpeer resources. 25.2.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 25.2.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. 
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 25.2.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 25.2.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 25.2.4. 
Advertising an IP address pool from a subset of nodes To advertise an IP address from an IP address pool from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise IP addresses from an address pool from a specific subnet, for example a public-facing subnet only. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool by using a custom resource: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Control which nodes in the cluster advertise the IP address from pool1 by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB In this example, the IP address from pool1 is advertised from NodeA and NodeB only. 25.2.5. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 25.4. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool with the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors limits the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Important Limiting the nodes to announce as hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . spec.interfaces string Optional: The list of interfaces that are used to announce the load balancer IP. 25.2.6. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool.
Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 25.2.7. Configuring MetalLB with an L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool with the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP address by using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 25.2.8. Configuring MetalLB with an L2 advertisement for selected interfaces By default, the IP addresses from the IP address pool that has been assigned to the service are advertised from all the network interfaces. The interfaces field in the L2Advertisement custom resource definition is used to restrict the network interfaces that advertise the IP address pool. This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool like the following example: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP address with the interfaces selector.
Create a YAML file, such as l2advertisement.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB Apply the configuration for the advertisement like the following example: USD oc apply -f l2advertisement.yaml Important The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface. 25.2.9. Configuring MetalLB with secondary networks From OpenShift Container Platform 4.14 the default network behavior is to not allow forwarding of IP packets between network interfaces. Therefore, when MetalLB is configured on a secondary interface, you need to add a machine configuration to enable IP forwarding for only the required interfaces. Note OpenShift Container Platform clusters upgraded from 4.13 are not affected because a global parameter is set during upgrade to enable global IP forwarding. To enable IP forwarding for the secondary interface, you have two options: Enable IP forwarding for a specific interface. Enable IP forwarding for all interfaces. Note Enabling IP forwarding for a specific interface provides more granular control, while enabling it for all interfaces applies a global setting. 25.2.9.1. Enabling IP forwarding for a specific interface Procedure Patch the Cluster Network Operator, setting the parameter routingViaHost to true , by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig": {"routingViaHost": true} }}}}' --type=merge Enable forwarding for a specific secondary interface, such as bridge-net by creating and applying a MachineConfig CR: Base64-encode the string that is used to configure network kernel parameters by running the following command on your local machine: USD echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0 Example output bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= Create the MachineConfig CR to enable IP forwarding for the specified secondary interface named bridge-net . 
Save the following YAML in the enable-ip-forward.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= 2 verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: "" 1 Node role where you want to enable IP forwarding, for example, worker 2 Populate with the generated base64 string Apply the configuration by running the following command: USD oc apply -f enable-ip-forward.yaml Verification After you apply the machine config, verify the changes by following this procedure: Enter into a debug session on the target node by running the following command: USD oc debug node/<node-name> This step instantiates a debug pod called <node-name>-debug . Set /host as the root directory within the debug shell by running the following command: USD chroot /host The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths. Verify that IP forwarding is enabled by running the following command: USD cat /etc/sysctl.d/enable-global-forwarding.conf Expected output net.ipv4.conf.bridge-net.forwarding = 1 net.ipv6.conf.bridge-net.forwarding = 1 net.ipv4.conf.bridge-net.rp_filter = 0 net.ipv6.conf.bridge-net.rp_filter = 0 The output indicates that IPv4 and IPv6 packet forwarding is enabled on the bridge-net interface. 25.2.9.2. Enabling IP forwarding globally Enable IP forwarding globally by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}} 25.2.10. Additional resources Configuring a community alias . 25.3. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers that MetalLB speaker pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services. 25.3.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 25.5. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource. spec.myASN integer Specifies the Autonomous System Number (ASN) for the local end of the BGP session. In all BGP peer custom resources that you add, specify the same value . The range is 0 to 4294967295 . spec.peerASN integer Specifies the ASN for the remote end of the BGP session. The range is 0 to 4294967295 . If you use this field, you cannot specify a value in the spec.dynamicASN field. spec.dynamicASN string Detects the ASN to use for the remote end of the session without explicitly setting it. Specify internal for a peer with the same ASN, or external for a peer with a different ASN. If you use this field, you cannot specify a value in the spec.peerASN field. 
spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. connectTime duration Specifies how long BGP waits between connection attempts to a neighbor. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 25.3.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 25.3.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . 
Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 25.3.4. Exposing a service through a network VRF You can expose a service through a virtual routing and forwarding (VRF) instance by associating a VRF on a network interface with a BGP peer. Important Exposing a service through a VRF on a BGP peer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By using a VRF on a network interface to expose a service through a BGP peer, you can segregate traffic to the service, configure independent routing decisions, and enable multi-tenancy support on a network interface. Note By establishing a BGP session through an interface belonging to a network VRF, MetalLB can advertise services through that interface and enable external traffic to reach the service through this interface. 
However, the network VRF routing table is different from the default VRF routing table used by OVN-Kubernetes. Therefore, the traffic cannot reach the OVN-Kubernetes network infrastructure. To enable the traffic directed to the service to reach the OVN-Kubernetes network infrastructure, you must configure routing rules to define the hops for network traffic. See the NodeNetworkConfigurationPolicy resource in "Managing symmetric routing with MetalLB" in the Additional resources section for more information. These are the high-level steps to expose a service through a network VRF with a BGP peer: Define a BGP peer and add a network VRF instance. Specify an IP address pool for MetalLB. Configure a BGP route advertisement for MetalLB to advertise a route using the specified IP address pool and the BGP peer associated with the VRF instance. Deploy a service to test the configuration. Prerequisites You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You defined a NodeNetworkConfigurationPolicy to associate a Virtual Routing and Forwarding (VRF) instance with a network interface. For more information about completing this prerequisite, see the Additional resources section. You installed MetalLB on your cluster. Procedure Create a BGPPeer custom resources (CR): Create a file, such as frrviavrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the network VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF. Note You must configure this network VRF instance in a NodeNetworkConfigurationPolicy CR. See the Additional resources for more information. Apply the configuration for the BGP peer by running the following command: USD oc apply -f frrviavrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 
Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create a Namespace , Deployment , and Service CR: Create a file, such as deploy-service.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: ["/bin/sh", "-c"] args: ["sleep INF"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer Apply the configuration for the namespace, deployment, and service by running the following command: USD oc apply -f deploy-service.yaml Verification Identify a MetalLB speaker pod by running the following command: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m Verify that the state of the BGP session is Established in the speaker pod by running the following command, replacing the variables to match your configuration: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> neigh" Example output BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09 ... Verify that the service is advertised correctly by running the following command: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> ipv4" Additional resources About virtual routing and forwarding Example: Network interface with a VRF instance node network configuration policy Configuring an egress service Managing symmetric routing with MetalLB 25.3.5. Example BGP peer configurations 25.3.5.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 25.3.5.2. Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD complements BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 25.3.5.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6.
apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 25.3.6. steps Configuring services to use MetalLB 25.4. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 25.4.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. Note The community CRD applies only to BGPAdvertisement. Table 25.6. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its spec.communities field. Table 25.7. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 25.4.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-peer-community receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias. 
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer 1 Specify the CommunityAlias.name here and not the community custom resource (CR) name. Apply the configuration: USD oc apply -f bgpadvertisement.yaml 25.5. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 25.5.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 25.8. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource. spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 25.5.2. 
Configuring a BFD profile As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254 Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml 25.5.3. Next steps Configure a BGP peer to use the BFD profile. 25.6. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 25.6.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 25.6.2. Request an IP address from a specific pool To assign an IP address from a specific range when you are not concerned with the specific IP address, you can use the metallb.io/address-pool annotation to request an IP address from the specified address pool. Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 25.6.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 25.6.4. Share a specific IP address By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.io/allow-shared-ip annotation to the services.
apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: "web-server-svc" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7 1 Specify the same value for the metallb.io/allow-shared-ip annotation. This value is referred to as the sharing key . 2 Specify different port numbers for the services. 3 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 25.6.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output 1 The annotation is present if you request an IP address from a specific pool. 2 The service type must indicate LoadBalancer . 3 The load-balancer ingress field indicates the external IP address if the service is assigned correctly. 4 The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 25.7. Managing symmetric routing with MetalLB As a cluster administrator, you can effectively manage traffic for pods behind a MetalLB load-balancer service with multiple host interfaces by implementing features from MetalLB, NMState, and OVN-Kubernetes. By combining these features in this context, you can provide symmetric routing, traffic segregation, and support clients on different networks with overlapping CIDR addresses. To achieve this functionality, learn how to implement virtual routing and forwarding (VRF) instances with MetalLB, and configure egress services. 
Important Configuring symmetric traffic by using a VRF instance with MetalLB and an egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 25.7.1. Challenges of managing symmetric routing with MetalLB When you use MetalLB with multiple host interfaces, MetalLB exposes and announces a service through all available interfaces on the host. This can present challenges relating to network isolation, asymmetric return traffic and overlapping CIDR addresses. One option to ensure that return traffic reaches the correct client is to use static routes. However, with this solution, MetalLB cannot isolate the services and then announce each service through a different interface. Additionally, static routing requires manual configuration and requires maintenance if remote sites are added. A further challenge of symmetric routing when implementing a MetalLB service is scenarios where external systems expect the source and destination IP address for an application to be the same. The default behavior for OpenShift Container Platform is to assign the IP address of the host network interface as the source IP address for traffic originating from pods. This is problematic with multiple host interfaces. You can overcome these challenges by implementing a configuration that combines features from MetalLB, NMState, and OVN-Kubernetes. 25.7.2. Overview of managing symmetric routing by using VRFs with MetalLB You can overcome the challenges of implementing symmetric routing by using NMState to configure a VRF instance on a host, associating the VRF instance with a MetalLB BGPPeer resource, and configuring an egress service for egress traffic with OVN-Kubernetes. Figure 25.1. Network overview of managing symmetric routing by using VRFs with MetalLB The configuration process involves three stages: 1. Define a VRF and routing rules Configure a NodeNetworkConfigurationPolicy custom resource (CR) to associate a VRF instance with a network interface. Use the VRF routing table to direct ingress and egress traffic. 2. Link the VRF to a MetalLB BGPPeer Configure a MetalLB BGPPeer resource to use the VRF instance on a network interface. By associating the BGPPeer resource with the VRF instance, the designated network interface becomes the primary interface for the BGP session, and MetalLB advertises the services through this interface. 3. Configure an egress service Configure an egress service to choose the network associated with the VRF instance for egress traffic. Optional: Configure an egress service to use the IP address of the MetalLB load-balancer service as the source IP for egress traffic. 25.7.3. Configuring symmetric routing by using VRFs with MetalLB You can configure symmetric network routing for applications behind a MetalLB service that require the same ingress and egress network paths. This example associates a VRF routing table with MetalLB and an egress service to enable symmetric routing for ingress and egress traffic for pods behind a LoadBalancer service. 
Note If you use the sourceIPBy: "LoadBalancerIP" setting in the EgressService CR, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR). You can use the sourceIPBy: "Network" setting on clusters that use OVN-Kubernetes configured with the gatewayConfig.routingViaHost specification set to true only. Additionally, if you use the sourceIPBy: "Network" setting, you must schedule the application workload on nodes configured with the network VRF instance. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the Kubernetes NMState Operator. Install the MetalLB Operator. Procedure Create a NodeNetworkConfigurationPolicy CR to define the VRF instance: Create a file, such as node-network-vrf.yaml , with content like the following example: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: "true" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254 - ip-to: 169.254.0.0/17 priority: 998 route-table: 254 1 The name of the policy. 2 This example applies the policy to all nodes with the label vrf:true . 3 The name of the interface. 4 The type of interface. This example creates a VRF instance. 5 The node interface that the VRF attaches to. 6 The name of the route table ID for the VRF. 7 The IPv4 address of the interface associated with the VRF. 8 Defines the configuration for network routes. The next-hop-address field defines the IP address of the next hop for the route. The next-hop-interface field defines the outgoing interface for the route. In this example, the VRF routing table is 2 , which references the ID that you define in the EgressService CR. 9 Defines additional route rules. The ip-to fields must match the Cluster Network CIDR, Service Network CIDR, and Internal Masquerade subnet CIDR. You can view the values for these CIDR address specifications by running the following command: oc describe network.operator/cluster . 10 The main routing table that the Linux kernel uses when calculating routes has the ID 254 . Apply the policy by running the following command: USD oc apply -f node-network-vrf.yaml Create a BGPPeer custom resource (CR): Create a file, such as frr-via-vrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF.
Apply the configuration for the BGP peer by running the following command: USD oc apply -f frr-via-vrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: "" 2 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 2 In this example, the EgressService CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod. Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create an EgressService CR: Create a file, such as egress-service.yaml , with content like the following example: apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: "LoadBalancerIP" 3 nodeSelector: matchLabels: vrf: "true" 4 network: "2" 5 1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify. 2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped. 3 This example assigns the LoadBalancer service ingress IP address as the source IP address for egress traffic. 4 If you specify LoadBalancer for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. In this example, only a node with the label vrf: "true" can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: "" . 5 Specify the routing table ID for egress traffic. Ensure that the value matches the route-table-id ID defined in the NodeNetworkConfigurationPolicy resource, for example, route-table-id: 2 . Apply the configuration for the egress service by running the following command: USD oc apply -f egress-service.yaml Verification Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command: USD curl <external_ip_address>:<port_number> 1 1 Update the external IP address and port number to suit your application endpoint. Optional: If you assigned the LoadBalancer service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client. 
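For reference, one way to perform the optional tcpdump check is shown in the following minimal sketch. The interface name and filter address are placeholders, not values taken from this procedure; replace them with the interface on the external client and the load-balancer service IP address from your environment: USD tcpdump -n -i <interface_name> host <load_balancer_ip_address> If the egress service is configured as intended, the packets that originate from the pod show the load-balancer service IP address, rather than a node IP address, as the source address.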
Additional resources About virtual routing and forwarding Exposing a service through a network VRF Example: Network interface with a VRF instance node network configuration policy Configuring an egress service 25.8. Configuring the integration of MetalLB and FRR-K8s FRRouting (FRR) is a free, open source internet routing protocol suite for Linux and UNIX platforms. FRR-K8s is a Kubernetes-based DaemonSet that exposes a subset of the FRR API in a Kubernetes-compliant manner. As a cluster administrator, you can use the FRRConfiguration custom resource (CR) to access some of the FRR services not provided by MetalLB, for example, receiving routes. MetalLB generates the FRR-K8s configuration corresponding to the MetalLB configuration applied. Warning When configuring Virtual Routing and Forwarding (VRF), users must assign their VRFs a table ID lower than 1000, because table IDs of 1000 and higher are reserved for OpenShift Container Platform. 25.8.1. FRR configurations You can create multiple FRRConfiguration CRs to use FRR services in MetalLB . MetalLB generates an FRRConfiguration object which FRR-K8s merges with all other configurations that all users have created. For example, you can configure FRR-K8s to receive all of the prefixes advertised by a given neighbor. The following example configures FRR-K8s to receive all of the prefixes advertised by a BGPPeer with host 172.18.0.5 : Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: metallb-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 toReceive: allowed: mode: all You can also configure FRR-K8s to always block a set of prefixes, regardless of the configuration applied. This can be useful to avoid routes towards the pod or ClusterIP CIDRs that might result in cluster malfunctions. The following example blocks the set of prefixes 192.168.1.0/24 : Example MetalLB CR apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s frrk8sConfig: alwaysBlock: - 192.168.1.0/24 You can set FRR-K8s to block the Cluster Network CIDR and Service Network CIDR. You can view the values for these CIDR address specifications by running the following command: USD oc describe network.config/cluster 25.8.2. Configuring the FRRConfiguration CRD The following section provides reference examples that use the FRRConfiguration custom resource (CR). 25.8.2.1. The routers field You can use the routers field to configure multiple routers, one for each Virtual Routing and Forwarding (VRF) resource. For each router, you must define the Autonomous System Number (ASN). You can also define a list of Border Gateway Protocol (BGP) neighbors to connect to, as in the following example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 - address: 172.18.0.6 asn: 4200000000 port: 179 25.8.2.2. The toAdvertise field By default, FRR-K8s does not advertise the prefixes configured as part of a router configuration. In order to advertise them, you use the toAdvertise field.
You can advertise a subset of the prefixes, as in the following example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: prefixes: 1 - 192.168.2.0/24 prefixes: - 192.168.2.0/24 - 192.169.2.0/24 1 Advertises a subset of prefixes. The following example shows you how to advertise all of the prefixes: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: mode: all 1 prefixes: - 192.168.2.0/24 - 192.169.2.0/24 1 Advertises all prefixes. 25.8.2.3. The toReceive field By default, FRR-K8s does not process any prefixes advertised by a neighbor. You can use the toReceive field to process such addresses. You can configure for a subset of the prefixes, as in this example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: prefixes: - prefix: 192.168.1.0/24 - prefix: 192.169.2.0/24 ge: 25 1 le: 28 2 1 2 The prefix is applied if the prefix length is less than or equal to the le prefix length and greater than or equal to the ge prefix length. The following example configures FRR to handle all the prefixes announced: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: mode: all 25.8.2.4. The bgp field You can use the bgp field to define various BFD profiles and associate them with a neighbor. In the following example, BFD backs up the BGP session and FRR can detect link failures: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 64512 port: 180 bfdProfile: defaultprofile bfdProfiles: - name: defaultprofile 25.8.2.5. The nodeSelector field By default, FRR-K8s applies the configuration to all nodes where the daemon is running. You can use the nodeSelector field to specify the nodes to which you want to apply the configuration. For example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 nodeSelector: labelSelector: foo: "bar" The fields for the FRRConfiguration custom resource are described in the following table: Table 25.9. MetalLB FRRConfiguration custom resource Field Type Description spec.bgp.routers array Specifies the routers that FRR is to configure (one per VRF). spec.bgp.routers.asn integer The Autonomous System Number (ASN) to use for the local end of the session. spec.bgp.routers.id string Specifies the ID of the bgp router. spec.bgp.routers.vrf string Specifies the host vrf used to establish sessions from this router. spec.bgp.routers.neighbors array Specifies the neighbors to establish BGP sessions with. spec.bgp.routers.neighbors.asn integer Specifies the ASN to use for the remote end of the session. 
If you use this field, you cannot specify a value in the spec.bgp.routers.neighbors.dynamicASN field. spec.bgp.routers.neighbors.dynamicASN string Detects the ASN to use for the remote end of the session without explicitly setting it. Specify internal for a neighbor with the same ASN, or external for a neighbor with a different ASN. If you use this field, you cannot specify a value in the spec.bgp.routers.neighbors.asn field. spec.bgp.routers.neighbors.address string Specifies the IP address to establish the session with. spec.bgp.routers.neighbors.port integer Specifies the port to dial when establishing the session. Defaults to 179. spec.bgp.routers.neighbors.password string Specifies the password to use for establishing the BGP session. Password and PasswordSecret are mutually exclusive. spec.bgp.routers.neighbors.passwordSecret string Specifies the name of the authentication secret for the neighbor. The secret must be of type "kubernetes.io/basic-auth", and in the same namespace as the FRR-K8s daemon. The key "password" stores the password in the secret. Password and PasswordSecret are mutually exclusive. spec.bgp.routers.neighbors.holdTime duration Specifies the requested BGP hold time, per RFC4271. Defaults to 180s. spec.bgp.routers.neighbors.keepaliveTime duration Specifies the requested BGP keepalive time, per RFC4271. Defaults to 60s . spec.bgp.routers.neighbors.connectTime duration Specifies how long BGP waits between connection attempts to a neighbor. spec.bgp.routers.neighbors.ebgpMultiHop boolean Indicates if the BGPPeer is multi-hops away. spec.bgp.routers.neighbors.bfdProfile string Specifies the name of the BFD Profile to use for the BFD session associated with the BGP session. If not set, the BFD session is not set up. spec.bgp.routers.neighbors.toAdvertise.allowed array Represents the list of prefixes to advertise to a neighbor, and the associated properties. spec.bgp.routers.neighbors.toAdvertise.allowed.prefixes string array Specifies the list of prefixes to advertise to a neighbor. This list must match the prefixes that you define in the router. spec.bgp.routers.neighbors.toAdvertise.allowed.mode string Specifies the mode to use when handling the prefixes. You can set to filtered to allow only the prefixes in the prefixes list. You can set to all to allow all the prefixes configured on the router. spec.bgp.routers.neighbors.toAdvertise.withLocalPref array Specifies the prefixes associated with an advertised local preference. You must specify the prefixes associated with a local preference in the prefixes allowed to be advertised. spec.bgp.routers.neighbors.toAdvertise.withLocalPref.prefixes string array Specifies the prefixes associated with the local preference. spec.bgp.routers.neighbors.toAdvertise.withLocalPref.localPref integer Specifies the local preference associated with the prefixes. spec.bgp.routers.neighbors.toAdvertise.withCommunity array Specifies the prefixes associated with an advertised BGP community. You must include the prefixes associated with a local preference in the list of prefixes that you want to advertise. spec.bgp.routers.neighbors.toAdvertise.withCommunity.prefixes string array Specifies the prefixes associated with the community. spec.bgp.routers.neighbors.toAdvertise.withCommunity.community string Specifies the community associated with the prefixes. spec.bgp.routers.neighbors.toReceive array Specifies the prefixes to receive from a neighbor. 
spec.bgp.routers.neighbors.toReceive.allowed array Specifies the information that you want to receive from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed.prefixes array Specifies the prefixes allowed from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed.mode string Specifies the mode to use when handling the prefixes. When set to filtered , only the prefixes in the prefixes list are allowed. When set to all , all the prefixes configured on the router are allowed. spec.bgp.routers.neighbors.disableMP boolean Disables MP BGP to prevent it from separating IPv4 and IPv6 route exchanges into distinct BGP sessions. spec.bgp.routers.prefixes string array Specifies all prefixes to advertise from this router instance. spec.bgp.bfdProfiles array Specifies the list of bfd profiles to use when configuring the neighbors. spec.bgp.bfdProfiles.name string The name of the BFD Profile to be referenced in other parts of the configuration. spec.bgp.bfdProfiles.receiveInterval integer Specifies the minimum interval at which this system can receive control packets, in milliseconds. Defaults to 300ms . spec.bgp.bfdProfiles.transmitInterval integer Specifies the minimum transmission interval, excluding jitter, that this system wants to use to send BFD control packets, in milliseconds. Defaults to 300ms . spec.bgp.bfdProfiles.detectMultiplier integer Configures the detection multiplier to determine packet loss. To determine the connection loss-detection timer, multiply the remote transmission interval by this value. spec.bgp.bfdProfiles.echoInterval integer Configures the minimal echo receive transmission-interval that this system can handle, in milliseconds. Defaults to 50ms . spec.bgp.bfdProfiles.echoMode boolean Enables or disables the echo transmission mode. This mode is disabled by default, and not supported on multihop setups. spec.bgp.bfdProfiles.passiveMode boolean Mark session as passive. A passive session does not attempt to start the connection and waits for control packets from peers before it begins replying. spec.bgp.bfdProfiles.MinimumTtl integer For multihop sessions only. Configures the minimum expected TTL for an incoming BFD control packet. spec.nodeSelector string Limits the nodes that attempt to apply this configuration. If specified, only those nodes whose labels match the specified selectors attempt to apply the configuration. If it is not specified, all nodes attempt to apply this configuration. status string Defines the observed state of FRRConfiguration. 25.8.3. How FRR-K8s merges multiple configurations In a case where multiple users add configurations that select the same node, FRR-K8s merges the configurations. Each configuration can only extend others. This means that it is possible to add a new neighbor to a router, or to advertise an additional prefix to a neighbor, but not possible to remove a component added by another configuration. 25.8.3.1. Configuration conflicts Certain configurations can cause conflicts, leading to errors, for example: different ASN for the same router (in the same VRF) different ASN for the same neighbor (with the same IP / port) multiple BFD profiles with the same name but different values When the daemon finds an invalid configuration for a node, it reports the configuration as invalid and reverts to the valid FRR configuration. 25.8.3.2. Merging When merging, it is possible to do the following actions: Extend the set of IPs that you want to advertise to a neighbor. Add an extra neighbor with its set of IPs. 
Extend the set of IPs to which you want to associate a community. Allow incoming routes for a neighbor. Each configuration must be self-contained. This means, for example, that it is not possible to allow prefixes that are not defined in the router section by leveraging prefixes coming from another configuration. If the configurations to be applied are compatible, merging works as follows: FRR-K8s combines all the routers. FRR-K8s merges all prefixes and neighbors for each router. FRR-K8s merges all filters for each neighbor. Note A less restrictive filter has precedence over a stricter one. For example, a filter accepting some prefixes has precedence over a filter not accepting any, and a filter accepting all prefixes has precedence over one that accepts some. 25.9. MetalLB logging, troubleshooting, and support If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands. 25.9.1. Setting the MetalLB logging levels MetalLB uses FRRouting (FRR) in a container, and the default logging setting of info generates a large amount of logging. You can control the verbosity of the generated logs by setting the logLevel field, as illustrated in this example. Gain a deeper insight into MetalLB by setting the logLevel to debug as follows: Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: "" Apply the configuration: USD oc replace -f setdebugloglevel.yaml Note Use oc replace because this procedure assumes that the metallb CR has already been created and that you are changing only the log level. Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB. View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker Example output View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr Example output 25.9.1.1. FRRouting (FRR) log levels The following table describes the FRR logging levels. Table 25.10. Log levels Log level Description all Supplies all logging information for all logging levels. debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information. info Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level. warn Anything that can potentially cause inconsistent MetalLB behavior. Usually MetalLB automatically recovers from this type of error. error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix. none Turn off all logging. 25.9.2. Troubleshooting BGP issues The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role.
You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ... Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config" Example output 1 The router bgp section indicates the ASN for MetalLB. 2 Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added. 3 If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output. 4 Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added. Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary" Example output 1 Confirm that the output includes a line for each BGP peer custom resource that you added. 2 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer. Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30" Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool. Example output 1 Confirm that the output includes an IP address for a BGP peer. 25.9.3. Troubleshooting BFD issues The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ... Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief" Example output <.> Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' . 25.9.4. MetalLB metrics for BGP and BFD OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles. Table 25.11. MetalLB BFD metrics Name Description frrk8s_bfd_control_packet_input Counts the number of BFD control packets received from each BFD peer. frrk8s_bfd_control_packet_output Counts the number of BFD control packets sent to each BFD peer. frrk8s_bfd_echo_packet_input Counts the number of BFD echo packets received from each BFD peer. 
frrk8s_bfd_echo_packet_output Counts the number of BFD echo packets sent to each BFD. frrk8s_bfd_session_down_events Counts the number of times the BFD session with a peer entered the down state. frrk8s_bfd_session_up Indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down . frrk8s_bfd_session_up_events Counts the number of times the BFD session with a peer entered the up state. frrk8s_bfd_zebra_notifications Counts the number of BFD Zebra notifications for each BFD peer. Table 25.12. MetalLB BGP metrics Name Description frrk8s_bgp_announced_prefixes_total Counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. frrk8s_bgp_session_up Indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . frrk8s_bgp_updates_total Counts the number of BGP update messages sent to each BGP peer. frrk8s_bgp_opens_sent Counts the number of BGP open messages sent to each BGP peer. frrk8s_bgp_opens_received Counts the number of BGP open messages received from each BGP peer. frrk8s_bgp_notifications_sent Counts the number of BGP notification messages sent to each BGP peer. frrk8s_bgp_updates_total_received Counts the number of BGP update messages received from each BGP peer. frrk8s_bgp_keepalives_sent Counts the number of BGP keepalive messages sent to each BGP peer. frrk8s_bgp_keepalives_received Counts the number of BGP keepalive messages received from each BGP peer. frrk8s_bgp_route_refresh_sent Counts the number of BGP route refresh messages sent to each BGP peer. frrk8s_bgp_total_sent Counts the number of total BGP messages sent to each BGP peer. frrk8s_bgp_total_received Counts the number of total BGP messages received from each BGP peer. Additional resources See Querying metrics for all projects with the monitoring dashboard for information about using the monitoring dashboard. 25.9.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster
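As a brief illustration of collecting MetalLB data with must-gather, a minimal invocation might look like the following; the destination directory is an arbitrary example and not a required value: USD oc adm must-gather --dest-dir=/tmp/metallb-must-gather The command writes the collected MetalLB, MetalLB Operator, and FRR information, such as the /etc/frr/frr.conf file and the output of the listed vtysh commands, to the specified local directory.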
10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"", "Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 4 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! 
line vty ! bfd profile doc-example-bfd-profile-full transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"", "IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 Total number of neighbors 2", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"", "BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 1 Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"", "Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/load-balancing-with-metallb
21.9. Enabling and Disabling Counters
21.9. Enabling and Disabling Counters The nsslapd-counters parameter enables counters to run. However, running counters can affect performance, so it is also possible to turn counters off. If counters are off, they all have a value of zero (0). By default, counters are already enabled. To enable or disable performance counters, update the nsslapd-counters attribute with the dsconf utility. For example, to disable counters:
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-counters=off" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/sect.counters
Chapter 8. Working with clusters
Chapter 8. Working with clusters 8.1. Viewing system event information in an OpenShift Dedicated cluster Events in OpenShift Dedicated are modeled based on events that happen to API objects in an OpenShift Dedicated cluster. 8.1.1. Understanding events Events allow OpenShift Dedicated to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project use the following command: USD oc get events [-n <project>] 1 1 The name of the project. For example: USD oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "ovn-kubernetes": cannot set "ovn-kubernetes" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #... To view events in your project from the OpenShift Dedicated console. Launch the OpenShift Dedicated console. Click Home Events and select your project. Move to resource that you want to see events. For example: Home Projects <project-name> <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of OpenShift Dedicated. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed the container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. NilShaper Undefined shaper. NodeNotReady Node is not ready. 
NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.12. 
Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.13. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.14. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.15. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.16. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your OpenShift Dedicated nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on its selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. 
You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Dedicated cluster. Running the tool as a job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites Run the OpenShift Cluster Capacity Tool , which is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --podspec /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install OpenShift Cluster Capacity Tool .
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f cluster-capacity-role.yaml Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Create a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The cluster capacity analysis is mounted in a volume using a config map object named cluster-capacity-configmap to mount the input pod spec file pod.yaml into a volume test-volume at the path /test-pod . Create the job using the example job specification file below: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod.
Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Dedicated cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Dedicated calculates the limit-to-request ratio by dividing the limit by the request . 
This value should be a non-negative integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. 
When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. No storage limit set on images prevent it from being uploaded. The issue is being addressed. 8.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage and a DockerImage . Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of a number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.3.2. 
Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: USD oc delete limits <limit_name> 8.4. Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. 
Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.4.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Dedicated manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Dedicated allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Dedicated scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Dedicated prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.4.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Dedicated are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. 
On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Dedicated clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. 8.4.2. Understanding OpenJDK settings for OpenShift Dedicated The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. Java-based agents can use the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. 
These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and at OpenJDK and Containers . 8.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. By default, to ensure that these options are used by default for all JVM workloads run in the Java-based agent image, the OpenShift Dedicated Jenkins Maven agent image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. Create the pod by running the following command: USD oc create -f <file_name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 8.4.4. Understanding OOM kill policy OpenShift Dedicated can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. 
If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process that exits with code 137 indicates that it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: # oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 exit code indicates that the container process received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the following command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.4.5. Understanding pod eviction OpenShift Dedicated may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy . However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 8.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits.
8.4.5. Understanding pod eviction

OpenShift Dedicated may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies that the main process (PID 1) of each container receives a SIGTERM signal and then, some time later, a SIGKILL signal if the process has not already exited. Non-graceful eviction implies that the main process of each container immediately receives a SIGKILL signal.

An evicted pod has phase Failed and reason Evicted. It will not be restarted, regardless of the value of restartPolicy. However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one.

USD oc get pod test

Example output

NAME READY STATUS RESTARTS AGE
test 0/1 Evicted 0 1m

USD oc get pod test -o yaml

Example output

...
status:
  message: 'Pod The node was low on resource: [MemoryPressure].'
  phase: Failed
  reason: Evicted

8.5. Configuring your cluster to place pods on overcommitted nodes

In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable.

Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node.

The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration.

OpenShift Dedicated administrators can manage container density on nodes by configuring pod placement behavior and per-project resource limits that overcommit cannot exceed. Alternatively, administrators can disable project-level resource overcommitment on customer-created namespaces that are not managed by Red Hat.

For more information about container resource management, see Additional resources.

8.5.1. Project-level limits

In OpenShift Dedicated, overcommitment of project-level resources is enabled by default. If required by your use case, you can disable overcommitment on projects that are not managed by Red Hat. For the list of projects that are managed by Red Hat and cannot be modified, see "Red Hat Managed resources" in Support.

8.5.1.1. Disabling overcommitment for a project

If required by your use case, you can disable overcommitment on any project that is not managed by Red Hat. For a list of projects that cannot be modified, see "Red Hat Managed resources" in Support.

Prerequisites

You are logged in to the cluster using an account with cluster administrator or cluster editor permissions.

Procedure

Edit the namespace object file:

If you are using the web console:

Click Administration → Namespaces and click the namespace for the project.
In the Annotations section, click the Edit button.
Click Add more and enter a new annotation that uses a Key of quota.openshift.io/cluster-resource-override-enabled and a Value of false.
Click Save.

If you are using the OpenShift CLI (oc):

Edit the namespace:

USD oc edit namespace/<project_name>

Add the following annotation:

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    quota.openshift.io/cluster-resource-override-enabled: "false" 1
# ...

1 Setting this annotation to false disables overcommit for this namespace.

8.5.2. Additional resources

Restrict resource consumption with limit ranges
[ "oc get events [-n <project>] 1", "oc get events -n openshift-config", "LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"ovn-kubernetes\": cannot set \"ovn-kubernetes\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc create -f pod-spec.yaml", "podman login registry.redhat.io", "podman pull registry.redhat.io/openshift4/ose-cluster-capacity", "podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]", "oc create -f <file_name>.yaml", "oc create sa cluster-capacity-sa", "oc create sa cluster-capacity-sa -n default", "oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc create -f pod.yaml", "oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml", "apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap", "oc create -f cluster-capacity-job.yaml", "oc logs jobs/cluster-capacity-job", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"", "oc create -f <limit_range_file> -n <project> 1", "oc get limits -n demoproject", "NAME CREATED AT resource-limits 2020-07-15T17:14:23Z", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -", "oc delete limits <limit_name>", "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.", "JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"", "apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: 
allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc rsh test", "env | grep MEMORY | sort", "MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "oc edit namespace/<project_name>", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/nodes/working-with-clusters
Release notes for Red Hat build of OpenJDK 11.0.19
Release notes for Red Hat build of OpenJDK 11.0.19 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.19/index
Preface
Preface

Self-hosted engine installation is automated using Ansible. The installation script ( hosted-engine --deploy ) runs on an initial deployment host, and the Red Hat Virtualization Manager (or "engine") is installed and configured on a virtual machine that is created on the deployment host. The Manager and Data Warehouse databases are installed on the Manager virtual machine, but can be migrated to a separate server post-installation if required.

Hosts that can run the Manager virtual machine are referred to as self-hosted engine nodes. At least two self-hosted engine nodes are required to support the high availability feature.

A storage domain dedicated to the Manager virtual machine is referred to as the self-hosted engine storage domain. This storage domain is created by the installation script, so the underlying storage must be prepared before beginning the installation.

See the Planning and Prerequisites Guide for information on environment options and recommended configuration. See Self-Hosted Engine Recommendations for configuration specific to a self-hosted engine environment.

Red Hat Virtualization Key Components

Red Hat Virtualization Manager: A service that provides a graphical user interface and a REST API to manage the resources in the environment. The Manager is installed on a physical or virtual machine running Red Hat Enterprise Linux.
Hosts: Red Hat Enterprise Linux hosts (RHEL hosts) and Red Hat Virtualization Hosts (image-based hypervisors) are the two supported types of host. Hosts use Kernel-based Virtual Machine (KVM) technology and provide resources used to run virtual machines.
Shared Storage: A storage service is used to store the data associated with virtual machines.
Data Warehouse: A service that collects configuration information and statistical data from the Manager.

Self-Hosted Engine Architecture

The Red Hat Virtualization Manager runs as a virtual machine on self-hosted engine nodes (specialized hosts) in the same environment it manages. A self-hosted engine environment requires one less physical server, but requires more administrative overhead to deploy and manage. The Manager is highly available without external HA management.

The minimum setup of a self-hosted engine environment includes:

One Red Hat Virtualization Manager virtual machine that is hosted on the self-hosted engine nodes. The RHV-M Appliance is used to automate the installation of a Red Hat Enterprise Linux 8 virtual machine, and the Manager on that virtual machine.
A minimum of two self-hosted engine nodes for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. The HA services run on all self-hosted engine nodes to manage the high availability of the Manager virtual machine.
One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts.

Figure 1. Self-Hosted Engine Red Hat Virtualization Architecture
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/pr01
Chapter 7. Clustering
Chapter 7. Clustering

7.1. Clustering in Red Hat JBoss Data Virtualization

Clustering may be used to improve the performance of the system through:

Load Balancing
See Red Hat JBoss Data Virtualization Development Guide: Client Development for information on load balancing between multiple nodes.

Failover
See Red Hat JBoss Data Virtualization Development Guide: Client Development for information on failover with multiple nodes.

Distributed Caching
In cluster mode, result set caches and internal materialization caches are shared across the cluster automatically, so no further configuration is needed.

Event Distribution
Metadata and data modifications are distributed to all members of a cluster automatically when clustering is configured.
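For illustration, client-side load balancing and failover are typically driven by the JDBC connection URL rather than by additional server configuration. The following Java sketch is an assumption-based example, not part of this guide: it assumes the Teiid JDBC driver shipped with Red Hat JBoss Data Virtualization is on the classpath, and the VDB name MyVDB, the hosts host1 and host2, and the credentials are all hypothetical. See the Development Guide: Client Development for the authoritative connection options.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClusteredClient {
    public static void main(String[] args) throws Exception {
        // Listing several host:port pairs lets the driver spread connections across
        // the cluster and fail over to another node if one becomes unavailable.
        String url = "jdbc:teiid:MyVDB@mm://host1:31000,host2:31000";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}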
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/chap-clustering
8.5 Release Notes
8.5 Release Notes Red Hat Enterprise Linux 8.5 Release Notes for Red Hat Enterprise Linux 8.5 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.5_release_notes/index
2.2. Graphical Installer
2.2. Graphical Installer

This section describes what behaviors have changed in the graphical installer.

2.2.1. Devices and Disks

Use of the /dev/hdX device name is deprecated on the i386 and x86_64 architectures for IDE drives, and has changed to /dev/sdX. This change does not apply to the PPC architecture.

If you have difficulties with the installation not detecting a Smart Array card, enter linux isa at the installer prompt. This lets you manually select the required card.

Whereas older IDE drivers supported up to 63 partitions per device, SCSI devices are limited to 15 partitions per device. Anaconda uses the new libata driver in the same fashion as the rest of Red Hat Enterprise Linux, so it is unable to detect more than 15 partitions on an IDE disk during the installation or upgrade process. If you are upgrading a system with more than 15 partitions, migrating the disk to Logical Volume Manager (LVM) is recommended.

A change in the way that the kernel handles storage devices means that device names like /dev/hdX or /dev/sdX can differ from the values used in earlier releases. Anaconda solves this problem by relying on partition labels. If these labels are not present, then Anaconda provides a warning that these partitions need to be labeled. Systems that use Logical Volume Management (LVM) and the device mapper usually do not require relabeling.

With the inclusion of the Linux Unified Key Setup (LUKS) specification, support is included for installation to encrypted block devices, including the root file system. Refer to the Red Hat Enterprise Linux Installation Guide for more information on LUKS.

Not all IDE RAID controllers are supported. If your RAID controller is not yet supported by dmraid, it is possible to combine drives into RAID arrays by configuring Linux software RAID. For supported controllers, configure the RAID functions in the computer BIOS.

The version of GRUB included in Red Hat Enterprise Linux 6 now supports ext4, so Anaconda now allows you to use the ext4 file system on any partition, including the /boot and root partitions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-installation-graphical_installer