title | content | commands | url
---|---|---|---|
Chapter 3. Supported Configurations
|
Chapter 3. Supported Configurations 3.1. Supported configurations For supported hardware and software configurations, see the Red Hat JBoss Data Grid Supported Configurations reference on the Customer Portal at https://access.redhat.com/site/articles/115883 .
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.1_release_notes/chap-supported_configurations
|
Chapter 6. Support
|
Chapter 6. Support Red Hat and Microsoft are committed to providing excellent support for .NET and are working together to resolve any problems that occur on Red Hat supported platforms. At a high level, Red Hat supports the installation, configuration, and running of the .NET component in Red Hat Enterprise Linux (RHEL). Red Hat can also provide "commercially reasonable" support for issues we can help with, for instance, NuGet access problems, permissions issues, firewalls, and application questions. If the issue is a defect or vulnerability in .NET, we actively work with Microsoft to resolve it. .NET 6.0 is supported on RHEL 7, RHEL 8, RHEL 9, and Red Hat OpenShift Container Platform versions 3.3 and later. See .NET Core Life Cycle for information about the .NET support policy 6.1. Contact options There are a couple of ways you can get support, depending on how you are using .NET. If you are using .NET on-premises, you can contact either Red Hat Support or Microsoft directly. If you are using .NET in Microsoft Azure, you can contact either Red Hat Support or Azure Support to receive Integrated Support. Integrated Support is a collaborative support agreement between Red Hat and Microsoft. Customers using Red Hat products in Microsoft Azure are mutual customers, so both companies are united to provide the best troubleshooting and support experience possible. If you are using .NET on IBM Z and LinuxONE, you can contact Red Hat Support . If the Red Hat Support Engineer assigned to your case needs assistance from IBM, the Red Hat Support Engineer will collaborate with IBM directly without any action required from you. 6.2. Frequently asked questions Here are four of the most common support questions for Integrated Support. When do I access Integrated Support? You can engage Red Hat Support directly. If the Red Hat Support Engineer assigned to your case needs assistance from Microsoft, the Red Hat Support Engineer will collaborate with Microsoft directly without any action required from you. Likewise on the Microsoft side, they have a process for directly collaborating with Red Hat Support Engineers. What happens after I file a support case? Once the Red Hat support case has been created, a Red Hat Support Engineer will be assigned to the case and begin collaborating on the issue with you and your Microsoft Support Engineer. You should expect a response to the issue based on Red Hat's Production Support Service Level Agreement . What if I need further assistance? Contact Red Hat Support for assistance in creating your case or with any questions related to this process. You can view any of your open cases here. How do I engage Microsoft for support for an Azure platform issue? If you have support from Microsoft, you can open a case using whatever process you typically would follow. If you do not have support with Microsoft, you can always get support from Microsoft Support . 6.3. Additional support resources The Resources page at Red Hat Developers provides a wealth of information, including: Getting started documents Knowledgebase articles and solutions Blog posts .NET documentation is hosted on a Microsoft website. Here are some additional topics to explore: .NET ASP.NET Core C# F# Visual Basic You can also see more support policy information at Red Hat and Microsoft Azure Certified Cloud & Service Provider Support Policies .
| null |
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/support_release-notes-for-dotnet-rpms
|
8.145. perl-DateTime
|
8.145. perl-DateTime 8.145.1. RHBA-2013:1566 - perl-DateTime bug fix update Updated perl-DateTime packages that fix one bug are now available for Red Hat Enterprise Linux 6. DateTime is a class for the representation of date/time combinations, and is part of the Perl DateTime project. Bug Fix BZ# 978360 Previously, DateTime::Duration did not recognize the leap second inserted before 2012-07-01, which led to inaccurate duration calculations across the 2012-06-30T23:59:60 second. To fix this bug, the leap second appended at the end of 2012-06-30 has been added to the perl-DateTime leap second database. Time arithmetic using the Perl modules DateTime and DateTime::Duration now handles the 2012-06-30 leap second correctly. Users of perl-DateTime are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/perl-datetime
|
Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes
|
Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes As a storage administrator, you can use Red Hat's Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows you to use other services, such as OpenStack's file system service (Manila) by having a common command-line interface to interact with. The volumes module for the Ceph Manager daemon ( ceph-mgr ) implements the ability to export Ceph File Systems (CephFS). The Ceph Manager volumes module implements the following file system export abstractions: CephFS volumes CephFS subvolume groups CephFS subvolumes 4.1. Ceph File System volumes As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems. This section describes how to: Create a Ceph file system volume. List Ceph file system volumes. View information about a Ceph file system volume. Remove a Ceph file system volume. 4.1.1. Creating a Ceph file system volume Ceph Orchestrator is a module for Ceph Manager that creates a Metadata Server (MDS) for the Ceph File System (CephFS). This section describes how to create a CephFS volume. Note This creates the Ceph File System, along with the data and metadata pools. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS volume on the monitor node: Syntax Example 4.1.2. Listing Ceph file system volumes This section describes the step to list the Ceph File system (CephFS) volumes. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure List the CephFS volume: Example 4.1.3. Viewing information about a Ceph file system volume You can list basic details about a Ceph File System (CephFS) volume, such as attributes of data and metadata pools of the CephFS volume, pending subvolumes deletion count, and the like. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume created. Procedure View information about a CephFS volume: Syntax Example The output of the ceph fs volume info command includes: mon_addrs : List of monitor addresses. pending_subvolume_deletions : Number of subvolumes pending deletion. pools : Attributes of data and metadata pools. avail : The amount of free space available in bytes. name : Name of the pool. used : The amount of storage consumed in bytes. used_size : Current used size of the CephFS volume in bytes. 4.1.4. Removing a Ceph file system volume Ceph Orchestrator is a module for Ceph Manager that removes the Metadata Server (MDS) for the Ceph File System (CephFS). This section shows how to remove the Ceph File System (CephFS) volume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure If the mon_allow_pool_delete option is not set to true , then set it to true before removing the CephFS volume: Example Remove the CephFS volume: Syntax Example 4.2. 
Ceph File System subvolume groups As a storage administrator, you can create, list, fetch absolute path, and remove Ceph File System (CephFS) subvolume groups. CephFS subvolume groups are abstractions at a directory level that affect policies, for example, file layouts, across a set of subvolumes. Starting with Red Hat Ceph Storage 5.0, the subvolume group snapshot feature is not supported. You can only list and remove the existing snapshots of these subvolume groups. This section describes how to: Create a file system subvolume group. Set and manage quotas on a file system subvolume group. List file system subvolume groups. Fetch absolute path of a file system subvolume group. List snapshots of a file system subvolume group. Remove snapshot of a file system subvolume group. Remove a file system subvolume group. 4.2.1. Creating a file system subvolume group This section describes how to create a Ceph File System (CephFS) subvolume group. Note When creating a subvolume group, you can specify its data pool layout, uid, gid, and file mode in octal numerals. By default, the subvolume group is created with an octal file mode '755', uid '0', gid '0', and data pool layout of its parent directory. Note See Setting and managing quotas on a file system subvolume group to set quotas while creating a subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume group: Syntax Example The command succeeds even if the subvolume group already exists. 4.2.2. Setting and managing quotas on a file system subvolume group This section describes how to set and manage quotas on a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Set quotas while creating a subvolume group by providing size in bytes: Syntax Example Resize a subvolume group: Syntax Example Fetch the metadata of a subvolume group: Syntax Example 4.2.3. Listing file system subvolume groups This section describes the step to list the Ceph File System (CephFS) subvolume groups. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure List the CephFS subvolume groups: Syntax Example 4.2.4. Fetching absolute path of a file system subvolume group This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Fetch the absolute path of the CephFS subvolume group: Syntax Example 4.2.5. Listing snapshots of a file system subvolume group This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Snapshots of the subvolume group. Procedure List the snapshots of a CephFS subvolume group: Syntax Example 4.2.6. 
Removing snapshot of a file system subvolume group This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group. Note Using the --force flag allows the command to succeed even if the snapshot does not exist; without it, the command fails when the snapshot is missing. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume group: Syntax Example 4.2.7. Removing a file system subvolume group This section shows how to remove the Ceph File System (CephFS) subvolume group. Note The removal of a subvolume group fails if the group is not empty or does not exist. The --force flag allows a non-existent subvolume group to be removed. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Remove the CephFS subvolume group: Syntax Example 4.3. Ceph File System subvolumes As a storage administrator, you can create, list, fetch absolute path, fetch metadata, and remove Ceph File System (CephFS) subvolumes. Additionally, you can create, list, and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File System directory trees. This section describes how to: Create a file system subvolume. List file system subvolume. Resizing a file system subvolume. Fetch absolute path of a file system subvolume. Fetch metadata of a file system subvolume. Create snapshot of a file system subvolume. Cloning subvolumes from snapshots. List snapshots of a file system subvolume. Fetching metadata of the snapshots of a file system subvolume. Remove a file system subvolume. Remove snapshot of a file system subvolume. 4.3.1. Creating a file system subvolume This section describes how to create a Ceph File System (CephFS) subvolume. Note When creating a subvolume, you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying the --namespace-isolated option. By default, a subvolume is created within the default subvolume group, and with an octal file mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume: Syntax Example The command succeeds even if the subvolume already exists. 4.3.2. Listing file system subvolume This section describes the step to list the Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure List the CephFS subvolume: Syntax Example 4.3.3. Resizing a file system subvolume This section describes the step to resize the Ceph File System (CephFS) subvolume. Note The ceph fs subvolume resize command resizes the subvolume quota using the size specified by new_size . The --no_shrink flag prevents the subvolume from shrinking below the currently used size of the subvolume. 
The subvolume can be resized to an infinite size by passing inf or infinite as the new_size . Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Resize a CephFS subvolume: Syntax Example 4.3.4. Fetching absolute path of a file system subvolume This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the absolute path of the CephFS subvolume: Syntax Example 4.3.5. Fetching metadata of a file system subvolume This section shows how to fetch metadata of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the metadata of a CephFS subvolume: Syntax Example Example output The output format is JSON and contains the following fields: atime : access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". bytes_pcent : quota used in percentage if quota is set, else displays "undefined". bytes_quota : quota size in bytes if quota is set, else displays "infinite". bytes_used : current used size of the subvolume in bytes. created_at : time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS". ctime : change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". data_pool : data pool the subvolume belongs to. features : features supported by the subvolume, such as , "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention". flavor : subvolume version, either 1 for version one or 2 for version two. gid : group ID of subvolume path. mode : mode of subvolume path. mon_addrs : list of monitor addresses. mtime : modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". path : absolute path of a subvolume. pool_namespace : RADOS namespace of the subvolume. state : current state of the subvolume, such as, "complete" or "snapshot-retained". type : subvolume type indicating whether it is a clone or subvolume. uid : user ID of subvolume path. 4.3.6. Creating snapshot of a file system subvolume This section shows how to create snapshots of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. In addition to read ( r ) and write ( w ) capabilities, clients also require s flag on a directory path within the file system. Procedure Verify that the s flag is set on the directory: Syntax Example 1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a . Create a snapshot of the Ceph File System subvolume: Syntax Example 4.3.7. Cloning subvolumes from snapshots Subvolumes can be created by cloning subvolume snapshots. It is an asynchronous operation involving copying data from a snapshot to a subvolume. Note Cloning is inefficient for very large data sets. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. 
To create or delete snapshots, in addition to read and write capability, clients require s flag on a directory path within the filesystem. Syntax In the following example, client.0 can create or delete snapshots in the bar directory of filesystem cephfs_a . Example Procedure Create a Ceph File System (CephFS) volume: Syntax Example This creates the CephFS file system, its data and metadata pools. Create a subvolume group. By default, the subvolume group is created with an octal file mode '755', and data pool layout of its parent directory. Syntax Example Create a subvolume. By default, a subvolume is created within the default subvolume group, and with an octal file mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit. Syntax Example Create a snapshot of a subvolume: Syntax Example Initiate a clone operation: Note By default, cloned subvolumes are created in the default group. If the source subvolume and the target clone are in the default group, run the following command: Syntax Example If the source subvolume is in the non-default group, then specify the source subvolume group in the following command: Syntax Example If the target clone is to a non-default group, then specify the target group in the following command: Syntax Example Check the status of the clone operation: Syntax Example Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide . 4.3.8. Listing snapshots of a file system subvolume This section provides the step to list the snapshots of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure List the snapshots of a CephFS subvolume: Syntax Example 4.3.9. Fetching metadata of the snapshots of a file system subvolume This section provides the step to fetch the metadata of the snapshots of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with CephFS deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure Fetch the metadata of the snapshots of a CephFS subvolume: Syntax Example Example output The output format is JSON and contains the following fields: created_at : time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff". data_pool : data pool the snapshot belongs to. has_pending_clones : "yes" if snapshot clone is in progress otherwise "no". size : snapshot size in bytes. 4.3.10. Removing a file system subvolume This section describes the step to remove the Ceph File System (CephFS) subvolume. Note The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents. A subvolume can be removed retaining existing snapshots of the subvolume using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a newer subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. 
Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Remove a CephFS subvolume: Syntax Example To recreate a subvolume from a retained snapshot: Syntax NEW_SUBVOLUME can either be the same subvolume that was deleted earlier, or you can clone the snapshot to a new subvolume. Example 4.3.11. Removing snapshot of a file system subvolume This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume. Note Using the --force flag allows the command to succeed even if the snapshot does not exist; without it, the command fails when the snapshot is missing. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume. Procedure Remove the snapshot of the CephFS subvolume: Syntax Example Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide . 4.4. Metadata information on Ceph File System subvolumes As a storage administrator, you can set, get, list, and remove metadata information of Ceph File System (CephFS) subvolumes. The custom metadata is for users to store their metadata in subvolumes. Users can store the key-value pairs similar to xattr in a Ceph File System. This section describes how to: Setting custom metadata on the file system subvolume Getting custom metadata on the file system subvolume Listing custom metadata on the file system subvolume Removing custom metadata from the file system subvolume 4.4.1. Setting custom metadata on the file system subvolume You can set custom metadata on the file system subvolume as a key-value pair. Note If the key_name already exists, then the old value is replaced by the new value. Note The KEY_NAME and VALUE should be a string of ASCII characters as specified in python's string.printable . The KEY_NAME is case-insensitive and is always stored in lower case. Important Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph File System (CephFS), CephFS volume, subvolume group, and subvolume created. Procedure Set the metadata on the CephFS subvolume: Syntax Example Optional: Set the custom metadata with a space in the KEY_NAME : Example This creates another metadata with KEY_NAME as test meta for the VALUE cluster . Optional: You can also set the same metadata with a different value: Example 4.4.2. Getting custom metadata on the file system subvolume You can get the custom metadata, the key-value pairs, of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure Get the metadata on the CephFS subvolume: Syntax Example 4.4.3. Listing custom metadata on the file system subvolume You can list the custom metadata associated with the key of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure List the metadata on the CephFS subvolume: Syntax Example 4.4.4. 
Removing custom metadata from the file system subvolume You can remove the custom metadata, the key-value pairs, of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure Remove the custom metadata on the CephFS subvolume: Syntax Example List the metadata: Example
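The ceph fs commands documented in this chapter compose naturally into small provisioning scripts. The following is a minimal shell sketch, not taken from the guide: the volume name cephfs, the group name tenant_a, the subvolume name app0, and the byte sizes are placeholder assumptions chosen for illustration.

```bash
# Create a subvolume group with a 10 GiB quota (sizes are given in bytes).
ceph fs subvolumegroup create cephfs tenant_a --size 10737418240

# Create a 1 GiB subvolume inside that group, isolated in its own RADOS namespace.
ceph fs subvolume create cephfs app0 --group_name tenant_a --size 1073741824 --namespace-isolated

# Print the absolute path that clients use to mount the subvolume.
ceph fs subvolume getpath cephfs app0 --group_name tenant_a
```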
|
[
"ceph fs volume create VOLUME_NAME",
"ceph fs volume create cephfs",
"ceph fs volume ls",
"ceph fs volume info VOLUME_NAME",
"ceph fs volume info cephfs { \"mon_addrs\": [ \"192.168.1.7:40977\", ], \"pending_subvolume_deletions\": 0, \"pools\": { \"data\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.data\", \"used\": 4096 } ], \"metadata\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.meta\", \"used\": 155648 } ] }, \"used_size\": 0 }",
"ceph config set mon mon_allow_pool_delete true",
"ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]",
"ceph fs volume rm cephfs --yes-i-really-mean-it",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subgroup0",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--size SIZE_IN_BYTES ] [--pool_layout DATA_POOL_NAME ] [--uid UID ] [--gid GID ] [--mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240",
"ceph fs subvolumegroup resize VOLUME_NAME GROUP_NAME new_size [--no_shrink]",
"ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 [ { \"bytes_used\": 10768679044 }, { \"bytes_quota\": 20737418240 }, { \"bytes_pcent\": \"51.93\" } ]",
"ceph fs subvolumegroup info VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup info cephfs subvolgroup_2 { \"atime\": \"2022-10-05 18:00:39\", \"bytes_pcent\": \"51.85\", \"bytes_quota\": 20768679043, \"bytes_used\": 10768679044, \"created_at\": \"2022-10-05 18:00:39\", \"ctime\": \"2022-10-05 18:21:26\", \"data_pool\": \"cephfs.cephfs.data\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"60.221.178.236:1221\", \"205.64.75.112:1221\", \"20.209.241.242:1221\" ], \"mtime\": \"2022-10-05 18:01:25\", \"uid\": 0 }",
"ceph fs subvolumegroup ls VOLUME_NAME",
"ceph fs subvolumegroup ls cephfs",
"ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup getpath cephfs subgroup0",
"ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup snapshot ls cephfs subgroup0",
"ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]",
"ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force",
"ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]",
"ceph fs subvolumegroup rm cephfs subgroup0 --force",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated",
"ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume ls cephfs --group_name subgroup0",
"ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]",
"ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink",
"ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name _SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume getpath cephfs sub0 --group_name subgroup0",
"ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume info cephfs sub0 --group_name subgroup0",
"ceph fs subvolume info cephfs sub0 { \"atime\": \"2023-07-14 08:52:46\", \"bytes_pcent\": \"0.00\", \"bytes_quota\": 1024000000, \"bytes_used\": 0, \"created_at\": \"2023-07-14 08:52:46\", \"ctime\": \"2023-07-14 08:53:54\", \"data_pool\": \"cephfs.cephfs.data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"flavor\": \"2\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.0.208.172:6789\", \"10.0.211.197:6789\", \"10.0.209.212:6789\" ], \"mtime\": \"2023-07-14 08:52:46\", \"path\": \"/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }",
"ceph auth get CLIENT_NAME",
"ceph auth get client.0 [client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" 1 caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\" 2",
"ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME ]",
"ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0",
"CLIENT_NAME key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = allow rw, allow rws path= DIRECTORY_PATH caps mon = allow r caps osd = allow rw tag cephfs data= DIRECTORY_NAME",
"[client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"",
"ceph fs volume create VOLUME_NAME",
"ceph fs volume create cephfs",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subgroup0",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0",
"ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1",
"ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]",
"ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }",
"ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0",
"ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0",
"{ \"created_at\": \"2022-05-09 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }",
"ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]",
"ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots",
"ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0",
"ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]",
"ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force",
"ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0",
"ceph fs subvolume metadata set cephfs sub0 \"test meta\" cluster --group_name subgroup0",
"ceph fs subvolume metadata set cephfs sub0 \"test_meta\" cluster2 --group_name subgroup0",
"ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0 cluster",
"ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata ls cephfs sub0 { \"test_meta\": \"cluster\" }",
"ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0",
"ceph fs subvolume metadata ls cephfs sub0 {}"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/management-of-ceph-file-system-volumes-subvolume-groups-and-subvolumes
|
Chapter 4. Installing Knative Serving
|
Chapter 4. Installing Knative Serving Installing Knative Serving allows you to create Knative services and functions on your cluster. It also allows you to use additional functionality such as autoscaling and networking options for your applications. After you install the OpenShift Serverless Operator, you can install Knative Serving by using the default settings, or configure more advanced settings in the KnativeServing custom resource (CR). For more information about configuration options for the KnativeServing CR, see Global configuration . Important If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless , you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving. 4.1. Installing Knative Serving by using the web console After you install the OpenShift Serverless Operator, install Knative Serving by using the OpenShift Container Platform web console. You can install Knative Serving by using the default settings or configure more advanced settings in the KnativeServing custom resource (CR). Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have logged in to the OpenShift Container Platform web console. You have installed the OpenShift Serverless Operator. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Check that the Project dropdown at the top of the page is set to Project: knative-serving . Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab. Click Create Knative Serving . In the Create Knative Serving page, you can install Knative Serving using the default settings by clicking Create . You can also modify settings for the Knative Serving installation by editing the KnativeServing object using either the form provided, or by editing the YAML. Using the form is recommended for simpler configurations that do not require full control of KnativeServing object creation. Editing the YAML is recommended for more complex configurations that require full control of KnativeServing object creation. You can access the YAML by clicking the edit YAML link in the top right of the Create Knative Serving page. After you complete the form, or have finished modifying the YAML, click Create . Note For more information about configuration options for the KnativeServing custom resource definition, see the documentation on Advanced installation configuration options . After you have installed Knative Serving, the KnativeServing object is created, and you are automatically directed to the Knative Serving tab. You will see the knative-serving custom resource in the list of resources. Verification Click on knative-serving custom resource in the Knative Serving tab. You will be automatically directed to the Knative Serving Overview page. Scroll down to look at the list of Conditions . You should see a list of conditions with a status of True , as shown in the example image. Note It may take a few seconds for the Knative Serving resources to be created. You can check their status in the Resources tab. If the conditions have a status of Unknown or False , wait a few moments and then check again after you have confirmed that the resources have been created. 4.2. 
Installing Knative Serving by using YAML After you install the OpenShift Serverless Operator, you can install Knative Serving by using the default settings, or configure more advanced settings in the KnativeServing custom resource (CR). You can use the following procedure to install Knative Serving by using YAML files and the oc CLI. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator. You have installed the OpenShift CLI ( oc ). Procedure Create a file named serving.yaml and copy the following example YAML into it: apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving Apply the serving.yaml file: $ oc apply -f serving.yaml Verification To verify the installation is complete, enter the following command: $ oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}' Example output DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True Note It may take a few seconds for the Knative Serving resources to be created. If the conditions have a status of Unknown or False , wait a few moments and then check again after you have confirmed that the resources have been created. Check that the Knative Serving resources have been created: $ oc get pods -n knative-serving Example output NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s Check that the necessary networking components have been installed to the automatically created knative-serving-ingress namespace: $ oc get pods -n knative-serving-ingress Example output NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s 4.3. Additional resources Kourier and Istio ingresses 4.4. Next steps After installing Knative Serving, you can start creating serverless applications . If you want to use Knative event-driven architecture, see Installing Knative Eventing .
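Once the conditions report True, a quick way to exercise the installation is to deploy a trivial Knative service. The following is an illustrative sketch only, assuming the kn CLI is installed; the project name, service name, and container image reference are placeholders rather than values from this guide.

```bash
# Create a throwaway project and a minimal Knative service from a placeholder image.
oc new-project knative-demo
kn service create hello --image <your-registry>/<hello-image>:latest -n knative-demo

# Print the URL of the deployed service.
kn service describe hello -n knative-demo -o url
```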
|
[
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving",
"oc apply -f serving.yaml",
"oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s",
"oc get pods -n knative-serving-ingress",
"NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/installing_openshift_serverless/installing-knative-serving
|
Chapter 6. Setting up your container repository
|
Chapter 6. Setting up your container repository When you set up your container repository, you must add a description, include a README, add teams that can access the repository, and tag automation execution environments. 6.1. Prerequisites to setting up your remote registry You are logged in to Ansible Automation Platform. You have permissions to change the repository. 6.2. Adding a README to your container repository Add a README to your container repository to provide instructions to your users on how to work with the container. Automation hub container repositories support Markdown for creating a README. By default, the README is empty. Prerequisites You have permissions to change containers. Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Content Execution Environments . Select your execution environment. On the Detail tab, click Add . In the Raw Markdown text field, enter your README text in Markdown. Click Save when you are finished. After you add a README, you can edit it at any time by clicking Edit and repeating steps 4 and 5. 6.3. Providing access to your automation execution environments Provide access to your automation execution environments for users who need to work with the images. Adding a team allows you to modify the permissions the team can have to the container repository. You can use this option to extend or restrict permissions based on what the team is assigned. Prerequisites You have change container namespace permissions. Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Content Execution Environments . Select your automation execution environment. From the Team Access tab, click Add roles . Select the team or teams to which you want to grant access and click . Select the roles that you want to add to this execution environment and click . Click Finish . 6.4. Tagging container images Tag automation execution environments to add an additional name to automation execution environments stored in your automation hub container repository. If no tag is added to an automation execution environment, automation hub defaults to latest for the name. Prerequisites You have change automation execution environment tags permissions. Procedure From the navigation panel, select Automation Content Execution Environments . Select your automation execution environments. Click the Images tab. Click the More Actions icon ... , and click Manage tags . Add a new tag in the text field and click Add . Optional: Remove current tags by clicking x on any of the tags for that image. Verification Click the Activity tab and review the latest changes. 6.5. Creating a credential To pull automation execution environments images from a password or token-protected registry, you must create a credential. In earlier versions of Ansible Automation Platform, you were required to deploy a registry to store execution environment images. On Ansible Automation Platform 2.0 and later, the system operates as if you already have a remote registry up and running. To store execution environment images, add the credentials of only your selected remote registries. Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Execution Infrastructure Credentials . Click Create credential to create a new credential. Enter an authorization Name , Description , and Organization . In the Credential Type drop-down, select Container Registry . Enter the Authentication URL . 
This is the remote registry address. Enter the Username and Password or Token required to log in to the remote registry. Optional: To enable SSL verification, select Verify SSL . Click Create credential . Filling in at least one of the fields organization, user, or team is mandatory, and can be done through the user interface.
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/creating_and_using_execution_environments/setting-up-container-repository
|
probe::linuxmib.ListenDrops
|
probe::linuxmib.ListenDrops Name probe::linuxmib.ListenDrops - Count of times connection requests were dropped Synopsis linuxmib.ListenDrops Values sk Pointer to the struct sock being acted on op Value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function linuxmib_filter_key . If the packet passes the filter, it is counted in the global ListenDrops (equivalent to SNMP's MIB LINUX_MIB_LISTENDROPS)
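For reference, a probe handler that uses these values might look like the following one-liner. This is an illustrative sketch, not part of the tapset documentation, and it assumes SystemTap is installed together with the kernel debug information needed to run scripts.

```bash
# Print each dropped listen-queue connection request as it is counted.
stap -e 'probe linuxmib.ListenDrops { printf("ListenDrops: sock=0x%x op=%d\n", sk, op) }'
```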
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-linuxmib-listendrops
|
Appendix A. Revision History
|
Appendix A. Revision History Revision History Revision 1.0-1.33.402 Fri Oct 25 2013 Rudiger Landmann Rebuild with Publican 4.0.0 Revision 1.0-1.33 July 24 2012 Ruediger Landmann Rebuild for Publican 3.0 Revision 1.0-1 Thu Sep 18 2008 Don Domingo migrated to new automated build system
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/appe-publican-revision_history
|
Chapter 4. Container images with Rust Toolset on RHEL 8
|
Chapter 4. Container images with Rust Toolset on RHEL 8 On RHEL 8, you can build your own Rust Toolset container images on top of Red Hat Universal Base Images (UBI) containers using Containerfiles. 4.1. Creating a container image of Rust Toolset on RHEL 8 On RHEL 8, Rust Toolset packages are part of the Red Hat Universal Base Images (UBIs) repositories. To keep the container size small, install only individual packages instead of the entire Rust Toolset. Prerequisites An existing Containerfile. For more information on creating Containerfiles, see the Dockerfile reference page. Procedure Visit the Red Hat Container Catalog . Select a UBI. Click Get this image and follow the instructions. To create a container containing Rust Toolset, add the following lines to your Containerfile: To create a container image containing an individual package only, add the following lines to your Containerfile: Replace < package_name > with the name of the package you want to install. 4.2. Additional resources For more information on Red Hat UBI images, see Working with Container Images . For more information on Red Hat UBI repositories, see Universal Base Images (UBI): Images, repositories, packages, and source code .
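After adding these lines, you can build and smoke-test the image locally. The commands below are an illustrative sketch rather than part of the documented procedure: the image tag rhel8-rust-toolset is an arbitrary placeholder, and the build assumes the Containerfile is in the current directory.

```bash
# Build the image from the Containerfile in the current directory.
podman build -t rhel8-rust-toolset .

# Verify that the Rust toolchain is available inside the container.
podman run --rm rhel8-rust-toolset cargo --version
```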
|
[
"FROM registry.access.redhat.com/ubi8/ubi: latest RUN yum install -y rust-toolset",
"RUN yum install < package-name >"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.79.0_toolset/assembly_container-images-with-comp-toolset
|
Chapter 6. Device Drivers
|
Chapter 6. Device Drivers 6.1. New drivers Network drivers Solarflare Siena network driver ( sfc-siena ), only in IBM Power Systems, Little Endian and AMD and Intel 64-bit architectures Nvidia sn2201 platform driver ( nvsw-sn2201 ), only in AMD and Intel 64-bit architectures AMD SEV Guest Driver ( sev-guest ), only in AMD and Intel 64-bit architectures TDX Guest Driver ( tdx-guest ), only in AMD and Intel 64-bit architectures Graphics drivers and miscellaneous drivers ACPI Video Driver ( video ), only in 64-bit ARM architecture DRM Buddy Allocator ( drm_buddy ), only in 64-bit ARM architecture and IBM Power Systems, Little Endian DRM display adapter helper ( drm_display_helper ), only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures Intel(R) GVT-g for KVM ( kvmgt ), only in AMD and Intel 64-bit architectures HP(R) iLO/iLO2 management processor ( hpilo ), only in 64-bit ARM architecture HPE watchdog driver ( hpwdt ), only in 64-bit ARM architecture AMD HSMP Platform Interface Driver ( amd_hsmp. ), only in AMD and Intel 64-bit architectures 6.2. Updated drivers Network drivers Intel(R) 10 Gigabit PCI Express Network Driver ( ixgbe ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) 10 Gigabit Virtual Function Network Driver ( ixgbevf ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) 2.5G Ethernet Linux Driver ( igc. ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) Ethernet Adaptive Virtual Function Network Driver ( iavf ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) Ethernet Connection XL710 Network Driver ( i40e ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) Ethernet Switch Host Interface Driver ( fm10k ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) Gigabit Ethernet Network Driver ( igb ) has been updated to version 4.18.0-477. (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) Gigabit Virtual Function Network Driver ( igbvf ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Intel(R) PRO/1000 Network Driver ( e1000e ) has been updated to version 4.18.0-477 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). Mellanox 5th generation network adapters (ConnectX series) core driver ( mlx5_core ) has been updated to version 4.18.0-477. The Netronome Flow Processor (NFP) driver ( nfp ) has been updated to version 4.18.0-477. Storage drivers Driver for Microchip Smart Family Controller version ( smartpqi ) has been updated to version 2.1.20-035 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). 
Emulex LightPulse Fibre Channel SCSI driver ( lpfc ) has been updated to version 14.0.0.18 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). LSI MPT Fusion SAS 3.0 Device Driver ( mpt3sas ) has been updated to version 43.100.00.00 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). MPI3 Storage Controller Device Driver ( mpi3mr ) has been updated to version 8.2.0.3.0 (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). QLogic Fibre Channel HBA Driver( qla2xxx ) has been updated to version 10.02.07.900-k (only in 64-bit ARM architecture, IBM Power Systems, Little Endian, and AMD and Intel 64-bit architectures). SCSI debug adapter driver ( scsi_debug ) has been updated to version 0191.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/device_drivers
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_automation_mesh_for_operator-based_installations/providing-feedback
|
Chapter 2. Executive reports
|
Chapter 2. Executive reports You can download a high-level executive report summarizing the security exposure of your infrastructure. Executive reports are two to three-page PDF files, designed for an executive audience, and include the following information: On page 1 Number of RHEL systems analyzed Number of individual CVEs to which your systems are currently exposed Number of security rules in your infrastructure List of CVEs that have advisories On page 2 Percentage of CVEs by severity (CVSS base score) range Number of CVEs published by 7, 30, and 90 day time frame Top three CVEs in your infrastructure, including security rules and known exploits On page 3 Security rule breakdown by severity Top 3 security rules, including severity and number of exposed systems 2.1. Downloading an executive report Use the following steps to download an executive report for key stakeholders in your security organization: Procedure Navigate to the Security > Vulnerability > Reports tab and log in if necessary. On the Executive report card, click Download PDF . Click Save File and click OK . Verification Verify that the PDF file is in your Downloads folder or other specified location. 2.2. Downloading an executive report using the vulnerability service API You can download an executive report using the vulnerability service API . Request URL: https://console.redhat.com/api/vulnerability/v1/report/executive Curl: curl -X GET "https://console.redhat.com/api/vulnerability/v1/report/executive" -H "accept: application/vnd.api+json"
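To save the PDF from the command line rather than only issuing the GET request, you can write the response to a file. The example below is a sketch only: the -o flag and filename are arbitrary choices, and the Authorization header is an assumption, because the vulnerability service API requires an authenticated session or token that this chapter does not describe.

```bash
# Download the executive report and write it to a local PDF file.
# $TOKEN is a placeholder for whatever credential your console.redhat.com setup uses.
curl -X GET "https://console.redhat.com/api/vulnerability/v1/report/executive" \
  -H "accept: application/vnd.api+json" \
  -H "Authorization: Bearer $TOKEN" \
  -o executive-report.pdf
```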
|
[
"curl -X GET \"https://console.redhat.com/api/vulnerability/v1/report/executive\" -H \"accept: application/vnd.api+json\""
] |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports/con-vuln-report-exec-report
|
Appendix F. Additional Replication High Availability Configuration Elements
|
Appendix F. Additional Replication High Availability Configuration Elements The following table lists additional ha-policy configuration elements that are not described in the Configuring replication high availability section. These elements have default settings that are sufficient for most common use cases. Table F.1. Additional configuration elements for replication high availability Name Used in Description check-for-live-server Embedded broker coordination Applies only to brokers configured as master brokers. Specifies whether the original master broker checks the cluster for another live broker using its own server ID when starting up. Set to true to fail back to the original master broker and avoid a "split brain" situation in which two brokers become live at the same time. The default value of this property is false . cluster-name Embedded broker and ZooKeeper coordination Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured, the cluster configuration with this name will be used when connecting to the cluster. If unset, the first cluster connection defined in the configuration is used. initial-replication-sync-timeout Embedded broker and ZooKeeper coordination The amount of time the replicating broker will wait upon completion of the initial replication process for the replica to acknowledge that it has received all the necessary data. The default value of this property is 30,000 milliseconds. NOTE: During this interval, any other journal-related operations are blocked. max-saved-replicated-journals-size Embedded broker and ZooKeeper coordination Applies to backup brokers only. Specifies how many backup journal files the backup broker retains. Once this value has been reached, the broker makes space for each new backup journal file by deleting the oldest journal file. The default value of this property is 2 .
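As a rough illustration of where these elements might live, the sketch below generates an ha-policy fragment with xml.etree.ElementTree. The <replication>/<master>/<slave> nesting and the cluster name are assumptions based on a typical broker.xml layout rather than something stated in the table above, so verify the placement against the Configuring replication high availability section before reusing it.

```python
#!/usr/bin/env python3
"""Illustrative only: emit an ha-policy fragment that exercises the elements
from the table above. The <replication>/<master>/<slave> nesting and the
cluster name are assumptions based on a typical broker.xml layout; in practice
the master block belongs in the master broker's broker.xml and the slave block
in the backup broker's, and the placement should be verified against the
'Configuring replication high availability' section."""
import xml.etree.ElementTree as ET

ha_policy = ET.Element("ha-policy")
replication = ET.SubElement(ha_policy, "replication")

master = ET.SubElement(replication, "master")
ET.SubElement(master, "check-for-live-server").text = "true"
ET.SubElement(master, "cluster-name").text = "my-cluster"  # example name
ET.SubElement(master, "initial-replication-sync-timeout").text = "30000"

slave = ET.SubElement(replication, "slave")
ET.SubElement(slave, "cluster-name").text = "my-cluster"
ET.SubElement(slave, "max-saved-replicated-journals-size").text = "2"

ET.indent(ha_policy)  # requires Python 3.9+
print(ET.tostring(ha_policy, encoding="unicode"))
```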
| null |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/replication_elements
|
6.7. Configuring Fencing for Cluster Members
|
6.7. Configuring Fencing for Cluster Members Once you have completed the initial steps of creating a cluster and creating fence devices, you need to configure fencing for the cluster nodes. To configure fencing for the nodes after creating a new cluster and configuring the fencing devices for the cluster, follow the steps in this section. Note that you must configure fencing for each node in the cluster. Note It is recommended that you configure multiple fencing mechanisms for each node. A fencing device can fail due to network split, a power outage, or a problem in the fencing device itself. Configuring multiple fencing mechanisms can reduce the likelihood that the failure of a fencing device will have fatal results. This section documents the following procedures: Section 6.7.1, "Configuring a Single Power-Based Fence Device for a Node" Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" Section 6.7.3, "Configuring a Backup Fence Device" Section 6.7.4, "Configuring a Node with Redundant Power" Section 6.7.6, "Removing Fence Methods and Fence Instances" 6.7.1. Configuring a Single Power-Based Fence Device for a Node Use the following procedure to configure a node with a single power-based fence device. The fence device is named my_apc , which uses the fence_apc fencing agent. In this example, the device named my_apc was previously configured with the --addfencedev option, as described in Section 6.5, "Configuring Fence Devices" . Add a fence method for the node, providing a name for the fence method. For example, to configure a fence method named APC for the node node-01.example.com in the configuration file on the cluster node node-01.example.com , execute the following command: Add a fence instance for the method. You must specify the fence device to use for the node, the node this instance applies to, the name of the method, and any options for this method that are specific to this node: For example, to configure a fence instance in the configuration file on the cluster node node-01.example.com that uses power port 1 on the APC switch for the fence device named my_apc to fence cluster node node-01.example.com using the method named APC , execute the following command: You will need to add a fence method for each node in the cluster. The following commands configure a fence method for each node with the method name APC . The device for the fence method specifies my_apc as the device name, which is a device previously configured with the --addfencedev option, as described in Section 6.5, "Configuring Fence Devices" . Each node is configured with a unique APC switch power port number: The port number for node-01.example.com is 1 , the port number for node-02.example.com is 2 , and the port number for node-03.example.com is 3 . Example 6.2, " cluster.conf After Adding Power-Based Fence Methods " shows a cluster.conf configuration file after you have added these fencing methods and instances to each node in the cluster. Example 6.2. cluster.conf After Adding Power-Based Fence Methods Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" .
|
[
"ccs -h host --addmethod method node",
"ccs -h node01.example.com --addmethod APC node01.example.com",
"ccs -h host --addfenceinst fencedevicename node method [ options ]",
"ccs -h node01.example.com --addfenceinst my_apc node01.example.com APC port=1",
"ccs -h node01.example.com --addmethod APC node01.example.com ccs -h node01.example.com --addmethod APC node02.example.com ccs -h node01.example.com --addmethod APC node03.example.com ccs -h node01.example.com --addfenceinst my_apc node01.example.com APC port=1 ccs -h node01.example.com --addfenceinst my_apc node02.example.com APC port=2 ccs -h node01.example.com --addfenceinst my_apc node03.example.com APC port=3",
"<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"my_apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"my_apc\" port=\"2\"/> </method> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"APC\"> <device name=\"my_apc\" port=\"3\"/> </method> </fence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"my_apc\" passwd=\"password_example\"/> </fencedevices> <rm> </rm> </cluster>"
] |
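As a quick sanity check after propagating the configuration, the following sketch parses a cluster.conf like the example above and prints the fence method, device, and port configured for each node. It is read-only and illustrative; it does not replace the ccs commands shown in this section, and the default path is the conventional /etc/cluster/cluster.conf location on a cluster node.

```python
#!/usr/bin/env python3
"""Read-only sanity check: list the fence method, device, and port configured
for each node in a cluster.conf like the example above. This does not replace
the ccs commands shown in this section."""
import sys
import xml.etree.ElementTree as ET


def summarize(path: str) -> None:
    root = ET.parse(path).getroot()
    for node in root.iter("clusternode"):
        name = node.get("name")
        for method in node.iter("method"):
            for device in method.iter("device"):
                print(
                    f"{name}: method={method.get('name')} "
                    f"device={device.get('name')} port={device.get('port')}"
                )


if __name__ == "__main__":
    # /etc/cluster/cluster.conf is the conventional location on cluster nodes.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "/etc/cluster/cluster.conf")
```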
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-member-ccs-ca
|
Chapter 2. BuildConfig [build.openshift.io/v1]
|
Chapter 2. BuildConfig [build.openshift.io/v1] Description Build configurations define a build process for new container images. There are three types of builds possible - a container image build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary container images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the container image registry specified in the "output" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created. Each build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have "output" set can be used to test code or run a verification build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object BuildConfigSpec describes when and how builds are created status object BuildConfigStatus contains current state of the build config object. 2.1.1. .spec Description BuildConfigSpec describes when and how builds are created Type object Required strategy Property Type Description completionDeadlineSeconds integer completionDeadlineSeconds is an optional duration in seconds, counted from the time when a build pod gets scheduled in the system, that the build may be active on a node before the system actively tries to terminate the build; value must be positive integer failedBuildsHistoryLimit integer failedBuildsHistoryLimit is the number of old failed builds to retain. When a BuildConfig is created, the 5 most recent failed builds are retained unless this value is set. If removed after the BuildConfig has been created, all failed builds are retained. mountTrustedCA boolean mountTrustedCA bind mounts the cluster's trusted certificate authorities, as defined in the cluster's proxy configuration, into the build. This lets processes within a build trust components signed by custom PKI certificate authorities, such as private artifact repositories and HTTPS proxies. When this field is set to true, the contents of /etc/pki/ca-trust within the build are managed by the build container, and any changes to this directory or its subdirectories (for example - within a Dockerfile RUN instruction) are not persisted in the build's output image. nodeSelector object (string) nodeSelector is a selector which must be true for the build pod to fit on a node If nil, it can be overridden by default build nodeselector values for the cluster. If set to an empty map or a map with any values, default build nodeselector values are ignored.
output object BuildOutput is input to a build strategy and describes the container image that the strategy should produce. postCommit object A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . 1. Shell script: "postCommit": { "script": "rake test --verbose", } The above is a convenient form which is equivalent to: "postCommit": { "command": ["/bin/sh", "-ic"], "args": ["rake test --verbose"] } 2. A command as the image entrypoint: "postCommit": { "command": ["rake", "test", "--verbose"] } Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint . 3. Pass arguments to the default entrypoint: "postCommit": { "args": ["rake", "test", "--verbose"] } This form is only useful if the image entrypoint can handle arguments. 4. Shell script with arguments: "postCommit": { "script": "rake test USD1", "args": ["--verbose"] } This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, USD0 will be "/bin/sh" and USD1, USD2, etc, are the positional arguments from Args. 5. Command with arguments: "postCommit": { "command": ["rake", "test"], "args": ["--verbose"] } This form is equivalent to appending the arguments to the Command slice. It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. resources ResourceRequirements resources computes resource requirements to execute the build. revision object SourceRevision is the revision or commit information from the source for the build runPolicy string RunPolicy describes how the new build created from this build configuration will be scheduled for execution. This is optional, if not specified we default to "Serial". serviceAccount string serviceAccount is the name of the ServiceAccount to use to run the pod created by this build. The pod will be allowed to use secrets referenced by the ServiceAccount source object BuildSource is the SCM used for the build. strategy object BuildStrategy contains the details of how to perform a build. successfulBuildsHistoryLimit integer successfulBuildsHistoryLimit is the number of old successful builds to retain. When a BuildConfig is created, the 5 most recent successful builds are retained unless this value is set. If removed after the BuildConfig has been created, all successful builds are retained. triggers array triggers determine how new Builds can be launched from a BuildConfig. If no triggers are defined, a new build can only occur as a result of an explicit client build creation. triggers[] object BuildTriggerPolicy describes a policy for a single trigger that results in a new Build. 2.1.2. .spec.output Description BuildOutput is input to a build strategy and describes the container image that the strategy should produce.
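Before the field-by-field tables continue, the five equivalent postCommit forms described above are easier to compare written out as data. The sketch below simply prints the corresponding JSON fragments; note that "USD" in the rendered text above stands in for the shell dollar sign, so "rake test USD1" corresponds to rake test $1.

```python
#!/usr/bin/env python3
"""Illustrative sketch: the five equivalent postCommit hook forms described
above, expressed as plain dictionaries and printed as JSON. All of them run
'rake test --verbose' in the committed image."""
import json

post_commit_forms = {
    "1. shell script": {"script": "rake test --verbose"},
    "2. command as entrypoint": {"command": ["rake", "test", "--verbose"]},
    "3. args for the default entrypoint": {"args": ["rake", "test", "--verbose"]},
    "4. script with args": {"script": "rake test $1", "args": ["--verbose"]},
    "5. command with args": {"command": ["rake", "test"], "args": ["--verbose"]},
}

for label, form in post_commit_forms.items():
    print(label)
    print(json.dumps({"postCommit": form}, indent=2))
    print()
```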
Type object Property Type Description imageLabels array imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. imageLabels[] object ImageLabel represents a label applied to the resulting image. pushSecret LocalObjectReference PushSecret is the name of a Secret that would be used for setting up the authentication for executing the Docker push to authentication enabled Docker Registry (or Docker Hub). to ObjectReference to defines an optional location to push the output of this build to. Kind must be one of 'ImageStreamTag' or 'DockerImage'. This value will be used to look up a container image repository to push to. In the case of an ImageStreamTag, the ImageStreamTag will be looked for in the namespace of the build unless Namespace is specified. 2.1.3. .spec.output.imageLabels Description imageLabels define a list of labels that are applied to the resulting image. If there are multiple labels with the same name then the last one in the list is used. Type array 2.1.4. .spec.output.imageLabels[] Description ImageLabel represents a label applied to the resulting image. Type object Required name Property Type Description name string name defines the name of the label. It must have non-zero length. value string value defines the literal value of the label. 2.1.5. .spec.postCommit Description A BuildPostCommitSpec holds a build post commit hook specification. The hook executes a command in a temporary container running the build output image, immediately after the last layer of the image is committed and before the image is pushed to a registry. The command is executed with the current working directory (USDPWD) set to the image's WORKDIR. The build will be marked as failed if the hook execution fails. It will fail if the script or command return a non-zero exit code, or if there is any other error related to starting the temporary container. There are five different ways to configure the hook. As an example, all forms below are equivalent and will execute rake test --verbose . Shell script: A command as the image entrypoint: Pass arguments to the default entrypoint: Shell script with arguments: Command with arguments: It is invalid to provide both Script and Command simultaneously. If none of the fields are specified, the hook is not executed. Type object Property Type Description args array (string) args is a list of arguments that are provided to either Command, Script or the container image's default entrypoint. The arguments are placed immediately after the command to be run. command array (string) command is the command to run. It may not be specified with Script. This might be needed if the image doesn't have /bin/sh , or if you do not want to use a shell. In all other cases, using Script might be more convenient. script string script is a shell script to be run with /bin/sh -ic . It may not be specified with Command. Use Script when a shell script is appropriate to execute the post build hook, for example for running unit tests with rake test . If you need control over the image entrypoint, or if the image does not have /bin/sh , use Command and/or Args. The -i flag is needed to support CentOS and RHEL images that use Software Collections (SCL), in order to have the appropriate collections enabled in the shell. E.g., in the Ruby image, this is necessary to make ruby , bundle and other binaries available in the PATH. 2.1.6. 
.spec.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 2.1.7. .spec.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 2.1.8. .spec.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 2.1.9. .spec.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 2.1.10. .spec.source Description BuildSource is the SCM used for the build. Type object Property Type Description binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. configMaps array configMaps represents a list of configMaps and their destinations that will be used for the build. configMaps[] object ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. contextDir string contextDir specifies the sub-directory where the source code for the application exists. This allows to have buildable sources in directory other than root of repository. dockerfile string dockerfile is the raw contents of a Dockerfile which should be built. When this option is specified, the FROM may be modified based on your strategy base image and additional ENV stanzas from your strategy environment will be added after the FROM, but before the rest of your Dockerfile stanzas. The Dockerfile source type may be used with other options like git - in those cases the Git repo will have any innate Dockerfile replaced in the context dir. git object GitBuildSource defines the parameters of a Git SCM images array images describes a set of images to be used to provide source for the build images[] object ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). secrets array secrets represents a list of secrets and their destinations that will be used only for the build. 
secrets[] object SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. sourceSecret LocalObjectReference sourceSecret is the name of a Secret that would be used for setting up the authentication for cloning private repository. The secret contains valid credentials for remote repository, where the data's key represent the authentication method to be used and value is the base64 encoded credentials. Supported auth methods are: ssh-privatekey. type string type of build input to accept 2.1.11. .spec.source.binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 2.1.12. .spec.source.configMaps Description configMaps represents a list of configMaps and their destinations that will be used for the build. Type array 2.1.13. .spec.source.configMaps[] Description ConfigMapBuildSource describes a configmap and its destination directory that will be used only at the build time. The content of the configmap referenced here will be copied into the destination directory instead of mounting. Type object Required configMap Property Type Description configMap LocalObjectReference configMap is a reference to an existing configmap that you want to use in your build. destinationDir string destinationDir is the directory where the files from the configmap should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. 2.1.14. .spec.source.git Description GitBuildSource defines the parameters of a Git SCM Type object Required uri Property Type Description httpProxy string httpProxy is a proxy used to reach the git repository over http httpsProxy string httpsProxy is a proxy used to reach the git repository over https noProxy string noProxy is the list of domains for which the proxy should not be used ref string ref is the branch/tag/ref to build. uri string uri points to the source that will be built. The structure of the source will depend on the type of build to run 2.1.15. .spec.source.images Description images describes a set of images to be used to provide source for the build Type array 2.1.16. .spec.source.images[] Description ImageSource is used to describe build source that will be extracted from an image or used during a multi stage build. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. 
Image sources can either be used to extract content from an image and place it into the build context along with the repository source, or used directly during a multi-stage container image build to allow content to be copied without overwriting the contents of the repository source (see the 'paths' and 'as' fields). Type object Required from Property Type Description as array (string) A list of image names that this source will be used in place of during a multi-stage container image build. For instance, a Dockerfile that uses "COPY --from=nginx:latest" will first check for an image source that has "nginx:latest" in this field before attempting to pull directly. If the Dockerfile does not reference an image source it is ignored. This field and paths may both be set, in which case the contents will be used twice. from ObjectReference from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to copy source from. paths array paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. paths[] object ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. pullSecret LocalObjectReference pullSecret is a reference to a secret to be used to pull the image from a registry If the image is pulled from the OpenShift registry, this field does not need to be set. 2.1.17. .spec.source.images[].paths Description paths is a list of source and destination paths to copy from the image. This content will be copied into the build context prior to starting the build. If no paths are set, the build context will not be altered. Type array 2.1.18. .spec.source.images[].paths[] Description ImageSourcePath describes a path to be copied from a source image and its destination within the build directory. Type object Required sourcePath destinationDir Property Type Description destinationDir string destinationDir is the relative directory within the build directory where files copied from the image are placed. sourcePath string sourcePath is the absolute path of the file or directory inside the image to copy to the build directory. If the source path ends in /. then the content of the directory will be copied, but the directory itself will not be created at the destination. 2.1.19. .spec.source.secrets Description secrets represents a list of secrets and their destinations that will be used only for the build. Type array 2.1.20. .spec.source.secrets[] Description SecretBuildSource describes a secret and its destination directory that will be used only at the build time. The content of the secret referenced here will be copied into the destination directory instead of mounting. Type object Required secret Property Type Description destinationDir string destinationDir is the directory where the files from the secret should be available for the build time. For the Source build strategy, these will be injected into a container where the assemble script runs. Later, when the script finishes, all files injected will be truncated to zero length. For the container image build strategy, these will be copied into the build directory, where the Dockerfile is located, so users can ADD or COPY them during container image build. secret LocalObjectReference secret is a reference to an existing secret that you want to use in your build. 2.1.21. 
.spec.strategy Description BuildStrategy contains the details of how to perform a build. Type object Property Type Description customStrategy object CustomBuildStrategy defines input parameters specific to Custom build. dockerStrategy object DockerBuildStrategy defines input parameters specific to container image build. jenkinsPipelineStrategy object JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines sourceStrategy object SourceBuildStrategy defines input parameters specific to an Source build. type string type is the kind of build strategy. 2.1.22. .spec.strategy.customStrategy Description CustomBuildStrategy defines input parameters specific to Custom build. Type object Required from Property Type Description buildAPIVersion string buildAPIVersion is the requested API version for the Build object serialized and passed to the custom builder env array (EnvVar) env contains additional environment variables you want to pass into a builder container. exposeDockerSocket boolean exposeDockerSocket will allow running Docker commands (and build container images) from inside the container. forcePull boolean forcePull describes if the controller should configure the build pod to always pull the images for the builder or only pull if it is not present locally from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries secrets array secrets is a list of additional secrets that will be included in the build pod secrets[] object SecretSpec specifies a secret to be included in a build pod and its corresponding mount point 2.1.23. .spec.strategy.customStrategy.secrets Description secrets is a list of additional secrets that will be included in the build pod Type array 2.1.24. .spec.strategy.customStrategy.secrets[] Description SecretSpec specifies a secret to be included in a build pod and its corresponding mount point Type object Required secretSource mountPath Property Type Description mountPath string mountPath is the path at which to mount the secret secretSource LocalObjectReference secretSource is a reference to the secret 2.1.25. .spec.strategy.dockerStrategy Description DockerBuildStrategy defines input parameters specific to container image build. Type object Property Type Description buildArgs array (EnvVar) buildArgs contains build arguments that will be resolved in the Dockerfile. See https://docs.docker.com/engine/reference/builder/#/arg for more details. NOTE: Only the 'name' and 'value' fields are supported. Any settings on the 'valueFrom' field are ignored. dockerfilePath string dockerfilePath is the path of the Dockerfile that will be used to build the container image, relative to the root of the context (contextDir). Defaults to Dockerfile if unset. env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is a reference to an DockerImage, ImageStreamTag, or ImageStreamImage which overrides the FROM image in the Dockerfile for the build. If the Dockerfile uses multi-stage builds, this will replace the image in the last FROM directive of the file. 
imageOptimizationPolicy string imageOptimizationPolicy describes what optimizations the system can use when building images to reduce the final size or time spent building the image. The default policy is 'None' which means the final build image will be equivalent to an image created by the container image build API. The experimental policy 'SkipLayers' will avoid committing new layers in between each image step, and will fail if the Dockerfile cannot provide compatibility with the 'None' policy. An additional experimental policy 'SkipLayersAndWarn' is the same as 'SkipLayers' but simply warns if compatibility cannot be preserved. noCache boolean noCache if set to true indicates that the container image build must be executed with the --no-cache=true flag pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 2.1.26. .spec.strategy.dockerStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 2.1.27. .spec.strategy.dockerStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 2.1.28. .spec.strategy.dockerStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 2.1.29. .spec.strategy.dockerStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..'
or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 2.1.30. .spec.strategy.dockerStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 2.1.31. .spec.strategy.jenkinsPipelineStrategy Description JenkinsPipelineBuildStrategy holds parameters specific to a Jenkins Pipeline build. Deprecated: use OpenShift Pipelines Type object Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a build pipeline. jenkinsfile string Jenkinsfile defines the optional raw contents of a Jenkinsfile which defines a Jenkins pipeline build. jenkinsfilePath string JenkinsfilePath is the optional path of the Jenkinsfile that will be used to configure the pipeline relative to the root of the context (contextDir). If both JenkinsfilePath & Jenkinsfile are both not specified, this defaults to Jenkinsfile in the root of the specified contextDir. 2.1.32. .spec.strategy.sourceStrategy Description SourceBuildStrategy defines input parameters specific to an Source build. Type object Required from Property Type Description env array (EnvVar) env contains additional environment variables you want to pass into a builder container. forcePull boolean forcePull describes if the builder should pull the images from registry prior to building. from ObjectReference from is reference to an DockerImage, ImageStreamTag, or ImageStreamImage from which the container image should be pulled incremental boolean incremental flag forces the Source build to do incremental builds if true. pullSecret LocalObjectReference pullSecret is the name of a Secret that would be used for setting up the authentication for pulling the container images from the private Docker registries scripts string scripts is the location of Source scripts volumes array volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. 2.1.33. .spec.strategy.sourceStrategy.volumes Description volumes is a list of input volumes that can be mounted into the builds runtime environment. Only a subset of Kubernetes Volume sources are supported by builds. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 2.1.34. 
.spec.strategy.sourceStrategy.volumes[] Description BuildVolume describes a volume that is made available to build pods, such that it can be mounted into buildah's runtime environment. Only a subset of Kubernetes Volume sources are supported. Type object Required name source mounts Property Type Description mounts array mounts represents the location of the volume in the image build container mounts[] object BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. name string name is a unique identifier for this BuildVolume. It must conform to the Kubernetes DNS label standard and be unique within the pod. Names that collide with those added by the build controller will result in a failed build with an error message detailing which name caused the error. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names source object BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. 2.1.35. .spec.strategy.sourceStrategy.volumes[].mounts Description mounts represents the location of the volume in the image build container Type array 2.1.36. .spec.strategy.sourceStrategy.volumes[].mounts[] Description BuildVolumeMount describes the mounting of a Volume within buildah's runtime environment. Type object Required destinationPath Property Type Description destinationPath string destinationPath is the path within the buildah runtime environment at which the volume should be mounted. The transient mount within the build image and the backing volume will both be mounted read only. Must be an absolute path, must not contain '..' or ':', and must not collide with a destination path generated by the builder process Paths that collide with those added by the build controller will result in a failed build with an error message detailing which path caused the error. 2.1.37. .spec.strategy.sourceStrategy.volumes[].source Description BuildVolumeSource represents the source of a volume to mount Only one of its supported types may be specified at any given time. Type object Required type Property Type Description configMap ConfigMapVolumeSource configMap represents a ConfigMap that should populate this volume csi CSIVolumeSource csi represents ephemeral storage provided by external CSI drivers which support this capability secret SecretVolumeSource secret represents a Secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret type string type is the BuildVolumeSourceType for the volume source. Type must match the populated volume source. Valid types are: Secret, ConfigMap 2.1.38. .spec.triggers Description triggers determine how new Builds can be launched from a BuildConfig. If no triggers are defined, a new build can only occur as a result of an explicit client build creation. Type array 2.1.39. .spec.triggers[] Description BuildTriggerPolicy describes a policy for a single trigger that results in a new Build. 
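To make the shape of a trigger policy concrete before the property tables that follow, here is an illustrative sketch of a few .spec.triggers entries printed as JSON; the webhook secret name is a placeholder, not a value defined by this reference.

```python
#!/usr/bin/env python3
"""Illustrative sketch: a few BuildTriggerPolicy entries as they might appear
under .spec.triggers, printed as JSON. The webhook secret name is a
placeholder, not a value defined by this reference."""
import json

triggers = [
    # GitHub webhook trigger referencing a local secret by name
    {"type": "GitHub", "github": {"secretReference": {"name": "github-webhook-secret"}}},
    # Rebuild when the ImageStreamTag referenced by the build strategy changes
    {"type": "ImageChange", "imageChange": {}},
    # Trigger a build when the BuildConfig itself is first created
    {"type": "ConfigChange"},
]

print(json.dumps({"spec": {"triggers": triggers}}, indent=2))
```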
Type object Required type Property Type Description bitbucket object WebHookTrigger is a trigger that gets invoked using a webhook type of post generic object WebHookTrigger is a trigger that gets invoked using a webhook type of post github object WebHookTrigger is a trigger that gets invoked using a webhook type of post gitlab object WebHookTrigger is a trigger that gets invoked using a webhook type of post imageChange object ImageChangeTrigger allows builds to be triggered when an ImageStream changes type string type is the type of build trigger. Valid values: - GitHub GitHubWebHookBuildTriggerType represents a trigger that launches builds on GitHub webhook invocations - Generic GenericWebHookBuildTriggerType represents a trigger that launches builds on generic webhook invocations - GitLab GitLabWebHookBuildTriggerType represents a trigger that launches builds on GitLab webhook invocations - Bitbucket BitbucketWebHookBuildTriggerType represents a trigger that launches builds on Bitbucket webhook invocations - ImageChange ImageChangeBuildTriggerType represents a trigger that launches builds on availability of a new version of an image - ConfigChange ConfigChangeBuildTriggerType will trigger a build on an initial build config creation WARNING: In the future the behavior will change to trigger a build on any config change 2.1.40. .spec.triggers[].bitbucket Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.41. .spec.triggers[].bitbucket.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.42. .spec.triggers[].generic Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.43. .spec.triggers[].generic.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.44. .spec.triggers[].github Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.45. 
.spec.triggers[].github.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.46. .spec.triggers[].gitlab Description WebHookTrigger is a trigger that gets invoked using a webhook type of post Type object Property Type Description allowEnv boolean allowEnv determines whether the webhook can set environment variables; can only be set to true for GenericWebHook. secret string secret used to validate requests. Deprecated: use SecretReference instead. secretReference object SecretLocalReference contains information that points to the local secret being used 2.1.47. .spec.triggers[].gitlab.secretReference Description SecretLocalReference contains information that points to the local secret being used Type object Required name Property Type Description name string Name is the name of the resource in the same namespace being referenced 2.1.48. .spec.triggers[].imageChange Description ImageChangeTrigger allows builds to be triggered when an ImageStream changes Type object Property Type Description from ObjectReference from is a reference to an ImageStreamTag that will trigger a build when updated It is optional. If no From is specified, the From image from the build strategy will be used. Only one ImageChangeTrigger with an empty From reference is allowed in a build configuration. lastTriggeredImageID string lastTriggeredImageID is used internally by the ImageChangeController to save last used image ID for build This field is deprecated and will be removed in a future release. Deprecated paused boolean paused is true if this trigger is temporarily disabled. Optional. 2.1.49. .status Description BuildConfigStatus contains current state of the build config object. Type object Required lastVersion Property Type Description imageChangeTriggers array ImageChangeTriggers captures the runtime state of any ImageChangeTrigger specified in the BuildConfigSpec, including the value reconciled by the OpenShift APIServer for the lastTriggeredImageID. There is a single entry in this array for each image change trigger in spec. Each trigger status references the ImageStreamTag that acts as the source of the trigger. imageChangeTriggers[] object ImageChangeTriggerStatus tracks the latest resolved status of the associated ImageChangeTrigger policy specified in the BuildConfigSpec.Triggers struct. lastVersion integer lastVersion is used to inform about number of last triggered build. 2.1.50. .status.imageChangeTriggers Description ImageChangeTriggers captures the runtime state of any ImageChangeTrigger specified in the BuildConfigSpec, including the value reconciled by the OpenShift APIServer for the lastTriggeredImageID. There is a single entry in this array for each image change trigger in spec. Each trigger status references the ImageStreamTag that acts as the source of the trigger. Type array 2.1.51. .status.imageChangeTriggers[] Description ImageChangeTriggerStatus tracks the latest resolved status of the associated ImageChangeTrigger policy specified in the BuildConfigSpec.Triggers struct. Type object Property Type Description from object ImageStreamTagReference references the ImageStreamTag in an image change trigger by namespace and name. lastTriggerTime Time lastTriggerTime is the last time this particular ImageStreamTag triggered a Build to start. 
This field is only updated when this trigger specifically started a Build. lastTriggeredImageID string lastTriggeredImageID represents the sha/id of the ImageStreamTag when a Build for this BuildConfig was started. The lastTriggeredImageID is updated each time a Build for this BuildConfig is started, even if this ImageStreamTag is not the reason the Build is started. 2.1.52. .status.imageChangeTriggers[].from Description ImageStreamTagReference references the ImageStreamTag in an image change trigger by namespace and name. Type object Property Type Description name string name is the name of the ImageStreamTag for an ImageChangeTrigger namespace string namespace is the namespace where the ImageStreamTag for an ImageChangeTrigger is located 2.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/buildconfigs GET : list or watch objects of kind BuildConfig /apis/build.openshift.io/v1/watch/buildconfigs GET : watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs DELETE : delete collection of BuildConfig GET : list or watch objects of kind BuildConfig POST : create a BuildConfig /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs GET : watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name} DELETE : delete a BuildConfig GET : read the specified BuildConfig PATCH : partially update the specified BuildConfig PUT : replace the specified BuildConfig /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs/{name} GET : watch changes to an object of kind BuildConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/build.openshift.io/v1/buildconfigs Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind BuildConfig Table 2.2. HTTP responses HTTP code Response body 200 - OK BuildConfigList schema 401 - Unauthorized Empty 2.2.2. /apis/build.openshift.io/v1/watch/buildconfigs Table 2.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server.
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 2.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs Table 2.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BuildConfig Table 2.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.8. Body parameters Parameter Type Description body DeleteOptions schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind BuildConfig Table 2.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK BuildConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a BuildConfig Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body BuildConfig schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 201 - Created BuildConfig schema 202 - Accepted BuildConfig schema 401 - Unauthorized Empty 2.2.4. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs Table 2.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of BuildConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 2.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name} Table 2.18. Global path parameters Parameter Type Description name string name of the BuildConfig namespace string object name and auth scope, such as for teams and projects Table 2.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BuildConfig Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.21. Body parameters Parameter Type Description body DeleteOptions schema Table 2.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BuildConfig Table 2.23. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BuildConfig Table 2.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.25. Body parameters Parameter Type Description body Patch schema Table 2.26. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 201 - Created BuildConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BuildConfig Table 2.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.28. Body parameters Parameter Type Description body BuildConfig schema Table 2.29. HTTP responses HTTP code Reponse body 200 - OK BuildConfig schema 201 - Created BuildConfig schema 401 - Unauthorized Empty 2.2.6. /apis/build.openshift.io/v1/watch/namespaces/{namespace}/buildconfigs/{name} Table 2.30. Global path parameters Parameter Type Description name string name of the BuildConfig namespace string object name and auth scope, such as for teams and projects Table 2.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind BuildConfig. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
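The limit and continue parameters described above can be combined to page through a large collection of BuildConfig objects. The following is a minimal sketch of such a paginated list against the namespaced endpoint; the API server URL, bearer token, and namespace are placeholders for illustration, not values taken from this reference.

# Fetch the first chunk of at most 50 BuildConfigs from the placeholder namespace "my-namespace".
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/build.openshift.io/v1/namespaces/my-namespace/buildconfigs?limit=50"

# If the returned BuildConfigList sets metadata.continue, pass that token back to retrieve the next chunk.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://api.cluster.example.com:6443/apis/build.openshift.io/v1/namespaces/my-namespace/buildconfigs?limit=50&continue=<token-from-previous-response>"

An empty continue field in a response indicates that no further results are available.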
|
[
"\"postCommit\": { \"script\": \"rake test --verbose\", }",
"The above is a convenient form which is equivalent to:",
"\"postCommit\": { \"command\": [\"/bin/sh\", \"-ic\"], \"args\": [\"rake test --verbose\"] }",
"\"postCommit\": { \"command\": [\"rake\", \"test\", \"--verbose\"] }",
"Command overrides the image entrypoint in the exec form, as documented in Docker: https://docs.docker.com/engine/reference/builder/#entrypoint.",
"\"postCommit\": { \"args\": [\"rake\", \"test\", \"--verbose\"] }",
"This form is only useful if the image entrypoint can handle arguments.",
"\"postCommit\": { \"script\": \"rake test $1\", \"args\": [\"--verbose\"] }",
"This form is useful if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, $0 will be \"/bin/sh\" and $1, $2, etc, are the positional arguments from Args.",
"\"postCommit\": { \"command\": [\"rake\", \"test\"], \"args\": [\"--verbose\"] }",
"This form is equivalent to appending the arguments to the Command slice."
] |
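As a usage sketch, the script form quoted above would sit under spec.postCommit in a BuildConfig manifest; the metadata.name below is a placeholder, and only the postCommit stanza is drawn from the forms listed above.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: ruby-sample-build      # placeholder name
spec:
  postCommit:
    # Shell form: equivalent to command ["/bin/sh", "-ic"] with args ["rake test --verbose"]
    script: "rake test --verbose"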
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/workloads_apis/buildconfig-build-openshift-io-v1
|
Index
|
Index A ACPI configuring, Configuring ACPI For Use with Integrated Fence Devices APC power switch over SNMP fence device , APC Power Switch over SNMP APC power switch over telnet/SSH fence device , APC Power Switch over Telnet and SSH B Brocade fabric switch fence device , Brocade Fabric Switch C CISCO MDS fence device , Cisco MDS Cisco UCS fence device , Cisco UCS cluster administration configuring ACPI, Configuring ACPI For Use with Integrated Fence Devices D Dell DRAC 5 fence device , Dell Drac 5 Dell iDRAC fence device , IPMI over LAN E Eaton network power switch, Eaton Network Power Switch Egenera BladeFrame fence device , Egenera BladeFrame Emerson network power switch fence device , Emerson Network Power Switch (SNMP interface) ePowerSwitch fence device , ePowerSwitch F fence configuration, Fencing Pre-Configuration devices, Fence Devices fence agent fence_apc, APC Power Switch over Telnet and SSH fence_apc_snmp, APC Power Switch over SNMP fence_bladecenter, IBM BladeCenter fence_brocade, Brocade Fabric Switch fence_cisco_mds, Cisco MDS fence_cisco_ucs, Cisco UCS fence_drac5, Dell Drac 5 fence_eaton_snmp, Eaton Network Power Switch fence_egenera, Egenera BladeFrame fence_emerson, Emerson Network Power Switch (SNMP interface) fence_eps, ePowerSwitch fence_hpblade, Hewlett-Packard BladeSystem fence_ibmblade, IBM BladeCenter over SNMP fence_idrac, IPMI over LAN fence_ifmib, IF-MIB fence_ilo, Hewlett-Packard iLO fence_ilo2, Hewlett-Packard iLO fence_ilo3, IPMI over LAN fence_ilo3_ssh, HP iLO over SSH fence_ilo4, IPMI over LAN fence_ilo4_ssh, HP iLO over SSH fence_ilo_moonshot, HP Moonshot iLO fence_ilo_mp, Hewlett-Packard iLO MP fence_ilo_ssh, HP iLO over SSH fence_imm, IPMI over LAN fence_intelmodular, Intel Modular fence_ipdu, IBM iPDU fence_ipmilan, IPMI over LAN fence_kdump, Fence kdump fence_mpath, Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) fence_rhevm, RHEV-M REST API fence_rsb, Fujitsu-Siemens RemoteView Service Board (RSB) fence_scsi, SCSI Persistent Reservations fence_virt, Fence Virt (Serial/VMChannel Mode) fence_vmware_soap, VMware over SOAP API fence_wti, WTI Power Switch fence_xvm, Fence Virt (Multicast Mode) fence configuration, Fencing Pre-Configuration , Configuring Fencing with Conga SELinux, SELinux fence device APC power switch over SNMP, APC Power Switch over SNMP APC power switch over telnet/SSH, APC Power Switch over Telnet and SSH Brocade fabric switch, Brocade Fabric Switch Cisco MDS, Cisco MDS Cisco UCS, Cisco UCS Dell DRAC 5, Dell Drac 5 Dell iDRAC, IPMI over LAN Eaton network power switch, Eaton Network Power Switch Egenera BladeFrame, Egenera BladeFrame Emerson network power switch, Emerson Network Power Switch (SNMP interface) ePowerSwitch, ePowerSwitch Fence virt, Fence Virt (Serial/VMChannel Mode) Fence virt (Multicast Mode), Fence Virt (Multicast Mode) Fujitsu Siemens RemoteView Service Board (RSB), Fujitsu-Siemens RemoteView Service Board (RSB) HP BladeSystem, Hewlett-Packard BladeSystem HP iLO, Hewlett-Packard iLO HP iLO MP, Hewlett-Packard iLO MP HP iLO over SSH, HP iLO over SSH HP iLO2, Hewlett-Packard iLO HP iLO3, IPMI over LAN HP iLO3 over SSH, HP iLO over SSH HP iLO4, IPMI over LAN HP iLO4 over SSH, HP iLO over SSH HP Moonshot iLO, HP Moonshot iLO IBM BladeCenter, IBM BladeCenter IBM BladeCenter SNMP, IBM BladeCenter over SNMP IBM Integrated Management Module, IPMI over LAN IBM iPDU, IBM iPDU IF MIB, IF-MIB Intel Modular, Intel Modular IPMI LAN, IPMI over LAN multipath persistent reservation fencing, Multipath 
Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) RHEV-M REST API, RHEV-M REST API SCSI fencing, SCSI Persistent Reservations VMware (SOAP interface), VMware over SOAP API WTI power switch, WTI Power Switch fence devices, Fence Devices Fence virt fence device , Fence Virt (Serial/VMChannel Mode) , Fence Virt (Multicast Mode) fence_apc fence agent, APC Power Switch over Telnet and SSH fence_apc_snmp fence agent, APC Power Switch over SNMP fence_bladecenter fence agent, IBM BladeCenter fence_brocade fence agent, Brocade Fabric Switch fence_cisco_mds fence agent, Cisco MDS fence_cisco_ucs fence agent, Cisco UCS fence_drac5 fence agent, Dell Drac 5 fence_eaton_snmp fence agent, Eaton Network Power Switch fence_egenera fence agent, Egenera BladeFrame fence_emerson fence agent, Emerson Network Power Switch (SNMP interface) fence_eps fence agent, ePowerSwitch fence_hpblade fence agent, Hewlett-Packard BladeSystem fence_ibmblade fence agent, IBM BladeCenter over SNMP fence_idrac fence agent, IPMI over LAN fence_ifmib fence agent, IF-MIB fence_ilo fence agent, Hewlett-Packard iLO fence_ilo2 fence agent, Hewlett-Packard iLO fence_ilo3 fence agent, IPMI over LAN fence_ilo3_ssh fence agent, HP iLO over SSH fence_ilo4 fence agent, IPMI over LAN fence_ilo4_ssh fence agent, HP iLO over SSH fence_ilo_moonshot fence agent, HP Moonshot iLO fence_ilo_mp fence agent, Hewlett-Packard iLO MP fence_ilo_ssh fence agent, HP iLO over SSH fence_imm fence agent, IPMI over LAN fence_intelmodular fence agent, Intel Modular fence_ipdu fence agent, IBM iPDU fence_ipmilan fence agent, IPMI over LAN fence_kdump fence agent, Fence kdump fence_mpath fence agent, Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) fence_rhevm fence agent, RHEV-M REST API fence_rsb fence agent, Fujitsu-Siemens RemoteView Service Board (RSB) fence_scsi fence agent, SCSI Persistent Reservations fence_virt fence agent, Fence Virt (Serial/VMChannel Mode) fence_vmware_soap fence agent, VMware over SOAP API fence_wti fence agent, WTI Power Switch fence_xvm fence agent, Fence Virt (Multicast Mode) fencing configuration, Configuring Fencing with the ccs Command , Configuring Fencing with Conga fencing configuration, Configuring Fencing with the ccs Command Fujitsu Siemens RemoteView Service Board (RSB) fence device, Fujitsu-Siemens RemoteView Service Board (RSB) H HP Bladesystem fence device , Hewlett-Packard BladeSystem HP iLO fence device, Hewlett-Packard iLO HP iLO MP fence device , Hewlett-Packard iLO MP HP iLO over SSH fence device, HP iLO over SSH HP iLO2 fence device, Hewlett-Packard iLO HP iLO3 fence device, IPMI over LAN HP iLO3 over SSH fence device, HP iLO over SSH HP iLO4 fence device, IPMI over LAN HP iLO4 over SSH fence device, HP iLO over SSH HP Moonshot iLO fence device, HP Moonshot iLO I IBM BladeCenter fence device , IBM BladeCenter IBM BladeCenter SNMP fence device , IBM BladeCenter over SNMP IBM Integrated Management Module fence device , IPMI over LAN IBM iPDU fence device , IBM iPDU IF MIB fence device , IF-MIB integrated fence devices configuring ACPI, Configuring ACPI For Use with Integrated Fence Devices Intel Modular fence device , Intel Modular IPMI LAN fence device , IPMI over LAN M multipath persistent reservation fence device , Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) R RHEV-M REST API fence device , RHEV-M REST API S SCSI fencing, SCSI Persistent Reservations SELinux configuring, SELinux T tables fence devices, parameters, Fence 
Devices V VMware (SOAP interface) fence device , VMware over SOAP API W WTI power switch fence device , WTI Power Switch
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/ix01
|
Deploying installer-provisioned clusters on bare metal
|
Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.14 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/index
|
Chapter 5. Debezium logging
|
Chapter 5. Debezium logging Debezium has extensive logging built into its connectors, and you can change the logging configuration to control which of these log statements appear in the logs and where those logs are sent. Debezium (as well as Kafka, Kafka Connect, and Zookeeper) use the Log4j logging framework for Java. By default, the connectors produce a fair amount of useful information when they start up, but then produce very few logs when the connector is keeping up with the source databases. This is often sufficient when the connector is operating normally, but may not be enough when the connector is behaving unexpectedly. In such cases, you can change the logging level so that the connector generates much more verbose log messages describing what the connector is doing and what it is not doing. 5.1. Debezium logging concepts Before configuring logging, you should understand what Log4J loggers , log levels , and appenders are. Loggers Each log message produced by the application is sent to a specific logger (for example, io.debezium.connector.mysql ). Loggers are arranged in hierarchies. For example, the io.debezium.connector.mysql logger is the child of the io.debezium.connector logger, which is the child of the io.debezium logger. At the top of the hierarchy, the root logger defines the default logger configuration for all of the loggers beneath it. Log levels Every log message produced by the application also has a specific log level : ERROR - errors, exceptions, and other significant problems WARN - potential problems and issues INFO - status and general activity (usually low-volume) DEBUG - more detailed activity that would be useful in diagnosing unexpected behavior TRACE - very verbose and detailed activity (usually very high-volume) Appenders An appender is essentially a destination where log messages are written. Each appender controls the format of its log messages, giving you even more control over what the log messages look like. To configure logging, you specify the desired level for each logger and the appender(s) where those log messages should be written. Since loggers are hierarchical, the configuration for the root logger serves as a default for all of the loggers below it, although you can override any child (or descendant) logger. 5.2. Default Debezium logging configuration If you are running Debezium connectors in a Kafka Connect process, then Kafka Connect uses the Log4j configuration file (for example, /opt/kafka/config/connect-log4j.properties ) in the Kafka installation. By default, this file contains the following configuration: Example 5.1. Default configuration in connect-log4j.properties log4j.rootLogger=INFO, stdout 1 log4j.appender.stdout=org.apache.log4j.ConsoleAppender 2 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 3 log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n 4 ... Table 5.1. Descriptions of default connect-log4j.properties settings Property Description 1 The root logger, which defines the default logger configuration. By default, loggers include INFO , WARN , and ERROR messages. These log messages are written to the stdout appender. 2 Directs the stdout appender to write log messages to the console, as opposed to a file. 3 Specifies that the stdout appender uses a pattern matching algorithm to format log messages. 4 The pattern that the stdout appender uses (see the Log4j documentation for details). Unless you configure other loggers, all of the loggers that Debezium uses inherit the rootLogger configuration. 
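If you also want these messages written to a file in addition to the console, you can attach a second appender to the root logger. The following is a minimal sketch only; the connectFile appender name and the /var/log/kafka/connect.log path are assumptions for illustration, not part of the default configuration.

log4j.rootLogger=INFO, stdout, connectFile
# Rolling file appender that keeps up to five 10 MB log files
log4j.appender.connectFile=org.apache.log4j.RollingFileAppender
log4j.appender.connectFile.File=/var/log/kafka/connect.log
log4j.appender.connectFile.MaxFileSize=10MB
log4j.appender.connectFile.MaxBackupIndex=5
log4j.appender.connectFile.layout=org.apache.log4j.PatternLayout
log4j.appender.connectFile.layout.ConversionPattern=[%d] %p %m (%c)%n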
5.3. Configuring Debezium logging By default, Debezium connectors write all INFO , WARN , and ERROR messages to the console. You can change the default logging configuration by using one of the following methods: Setting the logging level by configuring loggers Dynamically setting the logging level with the Kafka Connect REST API Setting the logging level by adding mapped diagnostic contexts Note There are other methods that you can use to configure Debezium logging with Log4j. For more information, search for tutorials about setting up and using appenders to send log messages to specific destinations. 5.3.1. Changing the Debezium logging level by configuring loggers The default Debezium logging level provides sufficient information to show whether a connector is healthy or not. However, if a connector is not healthy, you can change its logging level to troubleshoot the issue. In general, Debezium connectors send their log messages to loggers with names that match the fully-qualified name of the Java class that is generating the log message. Debezium uses packages to organize code with similar or related functions. This means that you can control all of the log messages for a specific class or for all of the classes within or under a specific package. Procedure Open the connect-log4j.properties file. Configure a logger for the connector. The following example configures loggers for the MySQL connector and for the database schema history implementation used by the connector, and sets them to log DEBUG level messages: Example 5.2. connect-log4j.properties configuration to enable loggers and set the log level to DEBUG ... log4j.logger.io.debezium.connector.mysql=DEBUG, stdout 1 log4j.logger.io.debezium.relational.history=DEBUG, stdout 2 log4j.additivity.io.debezium.connector.mysql=false 3 log4j.additivity.io.debezium.storage.kafka.history=false ... Table 5.2. Descriptions of connect-log4j.properties settings for enabling loggers and setting the log level Property Description 1 Configures the logger named io.debezium.connector.mysql to send DEBUG , INFO , WARN , and ERROR messages to the stdout appender. 2 Configures the logger named io.debezium.relational.history to send DEBUG , INFO , WARN , and ERROR messages to the stdout appender. 3 This pair of log4j.additivity.io entries disable additivity . If you use multiple appenders, set additivity values to false to prevent duplicate log messages from being sent to the appenders of the parent loggers. If necessary, change the logging level for a specific subset of the classes within the connector. Increasing the logging level for the entire connector increases the log verbosity, which can make it difficult to understand what is happening. In these cases, you can change the logging level just for the subset of classes that are related to the issue that you are troubleshooting. Set the connector's logging level to either DEBUG or TRACE . Review the connector's log messages. Find the log messages that are related to the issue that you are troubleshooting. The end of each log message shows the name of the Java class that produced the message. Set the connector's logging level back to INFO . Configure a logger for each Java class that you identified. For example, consider a scenario in which you are unsure why the MySQL connector is skipping some events when it is processing the binlog. 
Rather than turn on DEBUG or TRACE logging for the entire connector, you can keep the connector's logging level at INFO and then configure DEBUG or TRACE on just the class that is reading the binlog: Example 5.3. connect-log4j.properties configuration that enables DEBUG logging for the BinlogReader class ... log4j.logger.io.debezium.connector.mysql=INFO, stdout log4j.logger.io.debezium.connector.mysql.BinlogReader=DEBUG, stdout log4j.logger.io.debezium.relational.history=INFO, stdout log4j.additivity.io.debezium.connector.mysql=false log4j.additivity.io.debezium.storage.kafka.history=false log4j.additivity.io.debezium.connector.mysql.BinlogReader=false ... 5.3.2. Dynamically changing the Debezium logging level with the Kafka Connect API You can use the Kafka Connect REST API to set logging levels for a connector dynamically at runtime. Unlike log level changes that you set in connect-log4j.properties , changes that you make via the API take effect immediately, and do not require you to restart the worker. The log level setting that you specify in the API applies only to the worker at the endpoint that receives the request. The log levels of other workers in the cluster remain unchanged. The specified level is not persisted after the worker restarts. To make persistent changes to the logging level, set the log level in connect-log4j.properties by configuring loggers or adding mapped diagnostic contexts . Procedure Set the log level by sending a PUT request to the admin/loggers endpoint that specifies the following information: The package for which you want to change the log level. The log level that you want to set. curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.connector. <connector_package> -d '{"level": " <log_level> "}' For example, to log debug information for a Debezium MySQL connector, send the following request to Kafka Connect: curl -s -X PUT -H "Content-Type:application/json" http://localhost:8083/admin/loggers/io.debezium.connector.mysql -d '{"level": "DEBUG"}' 5.3.3. Changing the Debezium logging level by adding mapped diagnostic contexts Most Debezium connectors (and the Kafka Connect workers) use multiple threads to perform different activities. This can make it difficult to look at a log file and find only those log messages for a particular logical activity. To make the log messages easier to find, Debezium provides several mapped diagnostic contexts (MDC) that provide additional information for each thread. Debezium provides the following MDC properties: dbz.connectorType A short alias for the type of connector. For example, MySql , Mongo , Postgres , and so on. All threads associated with the same type of connector use the same value, so you can use this to find all log messages produced by a given type of connector. dbz.connectorName The name of the connector or database server as defined in the connector's configuration. For example products , serverA , and so on. All threads associated with a specific connector instance use the same value, so you can find all of the log messages produced by a specific connector instance. dbz.connectorContext A short name for an activity running as a separate thread within the connector's task. For example, main , binlog , snapshot , and so on. In some cases, when a connector assigns threads to specific resources (such as a table or collection), the name of that resource could be used instead.
Each thread associated with a connector would use a distinct value, so you can find all of the log messages associated with this particular activity. To enable MDC for a connector, you configure an appender in the connect-log4j.properties file. Procedure Open the connect-log4j.properties file. Configure an appender to use any of the supported Debezium MDC properties. In the following example, the stdout appender is configured to use these MDC properties. Example 5.4. connect-log4j.properties configuration that sets the stdout appender to use MDC properties ... log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %X{dbz.connectorType}|%X{dbz.connectorName}|%X{dbz.connectorContext} %m [%c]%n ... The configuration in the preceding example produces log messages similar to the ones in the following output: ... 2017-02-07 20:49:37,692 INFO MySQL|dbserver1|snapshot Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=convertToNull with user 'debezium' [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,696 INFO MySQL|dbserver1|snapshot Snapshot is using user 'debezium' with these MySQL grants: [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,697 INFO MySQL|dbserver1|snapshot GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'debezium'@'%' [io.debezium.connector.mysql.SnapshotReader] ... Each line in the log includes the connector type (for example, MySQL ), the name of the connector (for example, dbserver1 ), and the activity of the thread (for example, snapshot ). 5.4. Debezium logging on OpenShift If you are using Debezium on OpenShift, you can use the Kafka Connect loggers to configure the Debezium loggers and logging levels. For more information about configuring logging properties in a Kafka Connect schema, see Deploying and Managing Streams for Apache Kafka on OpenShift .
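As a rough illustration of the OpenShift approach, the logger configuration can be declared inline in the KafkaConnect custom resource. The sketch below is an assumption about the resource layout (the API version, resource name, and logger keys may differ between Streams for Apache Kafka versions); consult the Streams documentation referenced above for the authoritative schema.

apiVersion: kafka.strimzi.io/v1beta2    # assumed API version
kind: KafkaConnect
metadata:
  name: my-connect-cluster              # placeholder name
spec:
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"
      log4j.logger.io.debezium.connector.mysql: "DEBUG"    # assumed logger key format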
|
[
"log4j.rootLogger=INFO, stdout 1 log4j.appender.stdout=org.apache.log4j.ConsoleAppender 2 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 3 log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n 4",
"log4j.logger.io.debezium.connector.mysql=DEBUG, stdout 1 log4j.logger.io.debezium.relational.history=DEBUG, stdout 2 log4j.additivity.io.debezium.connector.mysql=false 3 log4j.additivity.io.debezium.storage.kafka.history=false",
"log4j.logger.io.debezium.connector.mysql=INFO, stdout log4j.logger.io.debezium.connector.mysql.BinlogReader=DEBUG, stdout log4j.logger.io.debezium.relational.history=INFO, stdout log4j.additivity.io.debezium.connector.mysql=false log4j.additivity.io.debezium.storage.kafka.history=false log4j.additivity.io.debezium.connector.mysql.BinlogReader=false",
"curl -s -X PUT -H \"Content-Type:application/json\" http://localhost:8083/admin/loggers/io.debezium.connector. <connector_package> -d '{\"level\": \" <log_level> \"}'",
"curl -s -X PUT -H \"Content-Type:application/json\" http://localhost:8083/admin/loggers/io.debezium.connector.mysql -d '{\"level\": \"DEBUG\"}'",
"log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %X{dbz.connectorType}|%X{dbz.connectorName}|%X{dbz.connectorContext} %m [%c]%n",
"2017-02-07 20:49:37,692 INFO MySQL|dbserver1|snapshot Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=convertToNull with user 'debezium' [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,696 INFO MySQL|dbserver1|snapshot Snapshot is using user 'debezium' with these MySQL grants: [io.debezium.connector.mysql.SnapshotReader] 2017-02-07 20:49:37,697 INFO MySQL|dbserver1|snapshot GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'debezium'@'%' [io.debezium.connector.mysql.SnapshotReader]"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/debezium_user_guide/debezium-logging
|
Chapter 94. Additional resources
|
Chapter 94. Additional resources Getting started with case management Getting started with decision services Designing a decision service using DMN models Developing Solvers with Red Hat Process Automation Manager Predictions 2019: Expect A Pragmatic Vision Of AI
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/additional_resources_4
|
Chapter 2. Configuring an AWS account
|
Chapter 2. Configuring an AWS account Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account. 2.1. Configuring Route 53 To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source. Note If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records . If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high-level procedure. 2.1.1. Ingress Operator endpoint configuration for AWS Route 53 If you install in either Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses us-gov-west-1 region for Route53 and tagging API clients. The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'. For more information on AWS GovCloud (US) endpoints, see the Service Endpoints in the AWS documentation about GovCloud (US). Important Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region. Example Route 53 configuration platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2 1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions. 2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region. 2.2.
AWS account limits The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account. The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of clusters available by default Default AWS limit Description Instance Limits Varies Varies By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane nodes Three worker nodes These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need. In most regions, the worker machines use an m6i.large instance and the bootstrap and control plane machines use m6i.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead. Elastic IPs (EIPs) 0 to 1 5 EIPs per account To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region . Each private subnet requires a NAT Gateway , and each NAT gateway requires a separate elastic IP . Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit. Important To use the us-east-1 region, you must increase the EIP limit for your account. Virtual Private Clouds (VPCs) 5 5 VPCs per region Each cluster creates its own VPC. Elastic Load Balancing (ELB/NLB) 3 20 per region By default, each cluster creates internal and external network load balancers for the master API server and a single Classic Load Balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers . NAT Gateways 5 5 per availability zone The cluster deploys one NAT gateway in each availability zone. Elastic Network Interfaces (ENIs) At least 12 350 per region The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that zone uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region. Additional ENIs are created for additional machines and ELB load balancers that are created by cluster usage and deployed workloads. VPC Gateway 20 20 per account Each cluster creates a single VPC Gateway for S3 access. S3 buckets 99 100 buckets per account Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account. Security Groups 250 2,500 per account Each cluster creates 10 distinct security groups. 2.3. 
Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 2.1. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 2.2. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 2.3. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 2.4. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 2.5. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 2.6. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 2.7. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 2.8. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 2.9. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 2.10. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 2.11. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 2.12. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 2.13. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 2.14. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole 2.4. Creating an IAM user Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly-privileged account, and it is recommended to use it for only initial account and billing configuration, creating an initial set of users, and securing the account. Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options: Procedure Specify the IAM user name and select Programmatic access . Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require. Note While it is possible to create a policy that grants the all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components. Optional: Add metadata to the user by attaching tags. Confirm that the user name that you specified is granted the AdministratorAccess policy. Record the access key ID and secret access key values. 
You must use these values when you configure your local machine to run the installation program. Important You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. 2.5. IAM Policies and AWS authentication By default, the installation program creates instance profiles for the bootstrap, control plane, and compute instances with the necessary permissions for the cluster to operate. Note To enable pulling images from the Amazon Elastic Container Registry (ECR) as a postinstallation task in a single-node OpenShift cluster, you must add the AmazonEC2ContainerRegistryReadOnly policy to the IAM role associated with the cluster's control plane role. However, you can create your own IAM roles and specify them as part of the installation process. You might need to specify your own roles to deploy the cluster or to manage the cluster after installation. For example: Your organization's security policies require that you use a more restrictive set of permissions to install the cluster. After the installation, the cluster is configured with an Operator that requires access to additional services. If you choose to specify your own IAM roles, you can take the following steps: Begin with the default policies and adapt as required. For more information, see "Default permissions for IAM instance profiles". Use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template that is based on the cluster's activity. For more information see, "Using AWS IAM Analyzer to create policy templates". 2.5.1. Default permissions for IAM instance profiles By default, the installation program creates IAM instance profiles for the bootstrap, control plane and worker instances with the necessary permissions for the cluster to operate. The following lists specify the default permissions for control plane and compute machines: Example 2.15. 
Default IAM role permissions for control plane instance profiles ec2:AttachVolume ec2:AuthorizeSecurityGroupIngress ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteVolume ec2:Describe* ec2:DetachVolume ec2:ModifyInstanceAttribute ec2:ModifyVolume ec2:RevokeSecurityGroupIngress elasticloadbalancing:AddTags elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerPolicy elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:DeleteListener elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeleteLoadBalancerListeners elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:Describe* elasticloadbalancing:DetachLoadBalancerFromSubnets elasticloadbalancing:ModifyListener elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer elasticloadbalancing:SetLoadBalancerPoliciesOfListener kms:DescribeKey Example 2.16. Default IAM role permissions for compute instance profiles ec2:DescribeInstances ec2:DescribeRegions 2.5.2. Specifying an existing IAM role Instead of allowing the installation program to create IAM instance profiles with the default permissions, you can use the install-config.yaml file to specify an existing IAM role for control plane and compute instances. Prerequisites You have an existing install-config.yaml file. Procedure Update compute.platform.aws.iamRole with an existing role for the compute machines. Sample install-config.yaml file with an IAM role for compute instances compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole Update controlPlane.platform.aws.iamRole with an existing role for the control plane machines. Sample install-config.yaml file with an IAM role for control plane instances controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole Save the file and reference it when installing the OpenShift Container Platform cluster. Note To change or update an IAM account after the cluster has been installed, see RHOCP 4 AWS cloud-credentials access key is expired (Red Hat Knowledgebase). Additional resources See Deploying the cluster . 2.5.3. Using AWS IAM Analyzer to create policy templates The minimal set of permissions that the control plane and compute instance profiles require depends on how the cluster is configured for its daily operation. One way to determine which permissions the cluster instances require is to use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template: A policy template contains the permissions the cluster has used over a specified period of time. You can then use the template to create policies with fine-grained permissions. Procedure The overall process could be: Ensure that CloudTrail is enabled. CloudTrail records all of the actions and events in your AWS account, including the API calls that are required to create a policy template. 
For more information, see the AWS documentation for working with CloudTrail . Create an instance profile for control plane instances and an instance profile for compute instances. Be sure to assign each role a permissive policy, such as PowerUserAccess. For more information, see the AWS documentation for creating instance profile roles . Install the cluster in a development environment and configure it as required. Be sure to deploy all of applications the cluster will host in a production environment. Test the cluster thoroughly. Testing the cluster ensures that all of the required API calls are logged. Use the IAM Access Analyzer to create a policy template for each instance profile. For more information, see the AWS documentation for generating policies based on the CloudTrail logs . Create and add a fine-grained policy to each instance profile. Remove the permissive policy from each instance profile. Deploy a production cluster using the existing instance profiles with the new policies. Note You can add IAM Conditions to your policy to make it more restrictive and compliant with your organization security requirements. 2.6. Supported AWS Marketplace regions Installing an OpenShift Container Platform cluster using an AWS Marketplace image is available to customers who purchase the offer in North America. While the offer must be purchased in North America, you can deploy the cluster to any of the following supported paritions: Public GovCloud Note Deploying a OpenShift Container Platform cluster using an AWS Marketplace image is not supported for the AWS secret regions or China regions. 2.7. Supported AWS regions You can deploy an OpenShift Container Platform cluster to the following regions. Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. 2.7.1. AWS public regions The following AWS public regions are supported: af-south-1 (Cape Town) ap-east-1 (Hong Kong) ap-northeast-1 (Tokyo) ap-northeast-2 (Seoul) ap-northeast-3 (Osaka) ap-south-1 (Mumbai) ap-south-2 (Hyderabad) ap-southeast-1 (Singapore) ap-southeast-2 (Sydney) ap-southeast-3 (Jakarta) ap-southeast-4 (Melbourne) ca-central-1 (Central) ca-west-1 (Calgary) eu-central-1 (Frankfurt) eu-central-2 (Zurich) eu-north-1 (Stockholm) eu-south-1 (Milan) eu-south-2 (Spain) eu-west-1 (Ireland) eu-west-2 (London) eu-west-3 (Paris) me-central-1 (UAE) me-south-1 (Bahrain) sa-east-1 (Sao Paulo) us-east-1 (N. Virginia) us-east-2 (Ohio) us-west-1 (N. California) us-west-2 (Oregon) 2.7.2. AWS GovCloud regions The following AWS GovCloud regions are supported: us-gov-west-1 us-gov-east-1 2.7.3. AWS SC2S and C2S secret regions The following AWS secret regions are supported: us-isob-east-1 Secret Commercial Cloud Services (SC2S) us-iso-east-1 Commercial Cloud Services (C2S) 2.7.4. AWS China regions The following AWS China regions are supported: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 2.8. 
Next steps Install an OpenShift Container Platform cluster: Quickly install a cluster with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates Installing a cluster on AWS with remote workers on AWS Outposts
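As a supplement to the IAM user procedure in this chapter, the same user can be created from the AWS CLI instead of the console. The following is a minimal sketch, assuming the AWS CLI is installed and authenticated with an account that can manage IAM; the user name ocp-installer is only an example.
$ aws iam create-user --user-name ocp-installer
$ aws iam attach-user-policy --user-name ocp-installer \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name ocp-installer
Record the AccessKeyId and SecretAccessKey values that the last command returns and make them available to the installation program, for example as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or in your ~/.aws/credentials file.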
Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.16 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. 
Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. 
Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. 
Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.10.3. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. 
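Before you change the flavor values, you can confirm which flavors your RHOSP deployment exposes and inspect the one you plan to use for bare metal machines. This is a minimal sketch; <bare_metal_flavor> is a placeholder for a flavor name in your environment.
$ openstack flavor list --long
$ openstack flavor show <bare_metal_flavor>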
Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.10.4. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. 
Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.10.4.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.10.4.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... 
networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.10.5. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 3.2. Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 3.3. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.10.6. Configuring a cluster with dual-stack networking You can create a dual-stack cluster on RHOSP. However, the dual-stack configuration is enabled only if you are using an RHOSP network with IPv4 and IPv6 subnets. Note RHOSP does not support the conversion of an IPv4 single-stack cluster to a dual-stack cluster network. 3.10.6.1. Deploying the dual-stack cluster For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. Prerequisites You enabled Dynamic Host Configuration Protocol (DHCP) on the subnets. Procedure Create a network with IPv4 and IPv6 subnets. The available address modes for the ipv6-ra-mode and ipv6-address-mode fields are: dhcpv6-stateful , dhcpv6-stateless , and slaac . 
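For example, the following commands create a network with one IPv4 subnet and one IPv6 subnet that uses the dhcpv6-stateful address mode. This is a minimal sketch: the names dualstack, subnet-v4, and subnet-v6 are placeholders, and the CIDRs are taken from the sample install-config.yaml files in this section; adjust them for your environment:
openstack network create dualstack
openstack subnet create --network dualstack --subnet-range 192.168.25.0/24 subnet-v4
openstack subnet create --network dualstack --ip-version 6 --subnet-range fd2e:6f44:5dd8:c956::/64 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful subnet-v6
DHCP is enabled by default on subnets that are created this way, which satisfies the prerequisite above.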
Note The dual-stack network MTU must accommodate both the minimum MTU for IPv6, which is 1280 , and the OVN-Kubernetes encapsulation overhead, which is 100 . Create the API and Ingress VIPs ports. Add the IPv6 subnet to the router to enable router advertisements. If you are using a provider network, you can enable router advertisements by adding the network as an external gateway, which also enables external connectivity. Choose one of the following install-config.yaml configurations: For an IPv4/IPv6 dual-stack cluster where you set IPv4 as the primary endpoint for your cluster nodes, edit the install-config.yaml file in a similar way to the following example: apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "192.168.25.0/24" - cidr: "fd2e:6f44:5dd8:c956::/64" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that all of the nodes across the cluster use for their networking needs. 7 The Classless Inter-Domain Routing (CIDR) of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. For an IPv6/IPv4 dual-stack cluster where you set IPv6 as the primary endpoint for your cluster nodes, edit the install-config.yaml file in a similar way to the following example: apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "fd2e:6f44:5dd8:c956::/64" - cidr: "192.168.25.0/24" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that all the nodes across the cluster use for their networking needs. 
7 The CIDR of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. Optional: When you use an installation host in an isolated dual-stack network, the IPv6 address might not be reassigned correctly upon reboot. To resolve this problem on Red Hat Enterprise Linux (RHEL) 8, complete the following steps: Create a file called /etc/NetworkManager/system-connections/required-rhel8-ipv6.conf that includes the following configuration: [connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto Reboot the installation host. Optional: When you use an installation host in an isolated dual-stack network, the IPv6 address might not be reassigned correctly upon reboot. To resolve this problem on Red Hat Enterprise Linux (RHEL) 9, complete the following steps: Create a file called /etc/NetworkManager/conf.d/required-rhel9-ipv6.conf that includes the following configuration: [connection] ipv6.addr-gen-mode=0 Reboot the installation host. Note The ip=dhcp,dhcp6 kernel argument, which is set on all of the nodes, results in a single Network Manager connection profile that is activated on multiple interfaces simultaneously. Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it. 3.10.7. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 3.11. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
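Before you choose an approach, you can check whether an external network that provides floating IP addresses is available to your project. The following read-only commands are a quick sketch of that check; the exact output depends on your RHOSP deployment:
openstack network list --external
openstack floating ip list
If no external network is available to you, see "Completing installation without floating IP addresses".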
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
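For example, if you completed the installation without floating IP addresses, one way to expose the Ingress endpoint afterward is to create a floating IP address on the external network and attach it to the port that holds the Ingress VIP. The port ID shown here is a placeholder that you can look up with openstack port list:
openstack floating ip create --port <ingress_vip_port_id> <external_network>
This command is a sketch of the general approach rather than a complete procedure; you must also publish the matching *.apps.<cluster_name>.<base_domain> DNS record for the new floating IP address.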
|
[
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"fd2e:6f44:5dd8:c956::/64\" - cidr: \"192.168.25.0/24\" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id",
"[connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto",
"[connection] ipv6.addr-gen-mode=0",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/installing-openstack-installer-custom
|
Chapter 2. Configure User Access to manage notifications
|
Chapter 2. Configure User Access to manage notifications To configure notifications settings, you must be a member of a group with the Notifications administrator role. This group must be configured in User Access by an Organization Administrator. In the Red Hat Hybrid Cloud Console > Settings > Identity & Access Management > User Access > Groups , an Organization Administrator performs the following high-level steps: Create a User Access group for Notifications administrators. Add the Notifications administrator role to the group. Add members (users with account access) to the group. Organization Administrator The Organization Administrator configures the User Access group for Notifications administrators, then adds the Notifications administrator role and users to the group. Notifications administrator Notifications administrators configure how services interact with notifications. Notifications administrators configure behavior groups to define how services notify users about events. Administrators can configure additional integrations as they become available, as well as edit, disable, and remove existing integrations. Notifications viewer The Notifications viewer role is automatically granted to everyone on the account and limits how a user can interact with notifications service views and configurations. A viewer can view notification configurations, but cannot modify or remove them. A viewer cannot configure, modify, or remove integrations. Additional resources To learn more about User Access on the Red Hat Hybrid Cloud Console, see the User Access Configuration Guide for Role-based Access Control (RBAC) . 2.1. Creating and configuring a notifications group in the Hybrid Cloud Console An Organization Administrator of a Hybrid Cloud Console account creates a group with the Notifications administrator role and adds members to the group. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console as an Organization Administrator. Procedure Click Settings . Under Identity & Access Management , click User Access . In the left navigation panel, expand User Access if necessary and then click Groups . Click Create group . Enter a group name, for example, Notifications administrators , and a description, and then click . Select the role to add to this group, in this case Notifications administrator , and then click . Add members to the group: Search for individual users or filter by username, email, or status. Check the box to each intended member's name, and then click . On the Review details screen, click Submit to finish creating the group. 2.2. Editing or removing a User Access group You can make changes to an existing User Access group in the Red Hat Hybrid Cloud Console and you can delete groups that are no longer needed. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console and meet one of the following criteria: You are a user with Organization Administrator permissions. You are a member of a group that has the User Access administrator role assigned to it. Procedure Navigate to Red Hat Hybrid Cloud Console > Settings > Identity & Access Management > User Access > Groups . Click the options icon (...) on the far right of the group name row, and then click Edit or Delete . Make and save changes or delete the group.
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console/assembly-config-user-access_notifications
|
Chapter 2. Example deployment: High availability cluster with Compute and Ceph
|
Chapter 2. Example deployment: High availability cluster with Compute and Ceph This example scenario shows the architecture, hardware and network specifications, and the undercloud and overcloud configuration files for a high availability deployment with the OpenStack Compute service and Red Hat Ceph Storage. Important This deployment is intended to use as a reference for test environments and is not supported for production environments. Figure 2.1. Example high availability deployment architecture 2.1. Example high availability hardware specifications The example HA deployment uses a specific hardware configuration. You can adjust the CPU, memory, storage, or NICs as needed in your own test deployment. Table 2.1. Physical computers Number of Computers Purpose CPUs Memory Disk Space Power Management NICs 1 undercloud node 4 24 GB 40 GB IPMI 2 (1 external; 1 on provisioning) + 1 IPMI 3 Controller nodes 4 24 GB 40 GB IPMI 3 (2 bonded on overcloud; 1 on provisioning) + 1 IPMI 3 Ceph Storage nodes 4 24 GB 40 GB IPMI 3 (2 bonded on overcloud; 1 on provisioning) + 1 IPMI 2 Compute nodes (add more as needed) 4 24 GB 40 GB IPMI 3 (2 bonded on overcloud; 1 on provisioning) + 1 IPMI 2.2. Example high availability network specifications The example HA deployment uses a specific virtual and physical network configuration. You can adjust the configuration as needed in your own test deployment. Note This example does not include hardware redundancy for the control plane and the provisioning network where the overcloud keystone admin endpoint is configured. For information about planning your high availability networking, see Section 1.3, "Planning high availability networking" . Table 2.2. Physical and virtual networks Physical NICs Purpose VLANs Description eth0 Provisioning network (undercloud) N/A Manages all nodes from director (undercloud) eth1 and eth2 Controller/External (overcloud) N/A Bonded NICs with VLANs External network VLAN 100 Allows access from outside the environment to the project networks, internal API, and OpenStack Horizon Dashboard Internal API VLAN 201 Provides access to the internal API between Compute nodes and Controller nodes Storage access VLAN 202 Connects Compute nodes to storage media Storage management VLAN 203 Manages storage media Project network VLAN 204 Provides project network services to RHOSP 2.3. Example high availability undercloud configuration files The example HA deployment uses the undercloud configuration files instackenv.json , undercloud.conf , and network-environment.yaml . instackenv.json undercloud.conf network-environment.yaml 2.4. Example high availability overcloud configuration files The example HA deployment uses the overcloud configuration files haproxy.cfg , corosync.cfg , and ceph.cfg . /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg (Controller nodes) This file identifies the services that HAProxy manages. It contains the settings for the services that HAProxy monitors. This file is identical on all Controller nodes. /etc/corosync/corosync.conf file (Controller nodes) This file defines the cluster infrastructure, and is available on all Controller nodes. /etc/ceph/ceph.conf (Ceph nodes) This file contains Ceph high availability settings, including the hostnames and IP addresses of the monitoring hosts. 2.5. Additional resources Deploying an Overcloud with Containerized Red Hat Ceph Chapter 1, Red Hat OpenStack Platform high availability overview and planning
|
[
"{ \"nodes\": [ { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.11\", \"mac\": [ \"2c:c2:60:3b:b3:94\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.12\", \"mac\": [ \"2c:c2:60:51:b7:fb\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.13\", \"mac\": [ \"2c:c2:60:76:ce:a5\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.51\", \"mac\": [ \"2c:c2:60:08:b1:e2\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.52\", \"mac\": [ \"2c:c2:60:20:a1:9e\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.53\", \"mac\": [ \"2c:c2:60:58:10:33\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"1\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.101\", \"mac\": [ \"2c:c2:60:31:a9:55\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"2\", \"pm_user\": \"admin\" }, { \"pm_password\": \"testpass\", \"memory\": \"24\", \"pm_addr\": \"10.100.0.102\", \"mac\": [ \"2c:c2:60:0d:e7:d1\" ], \"pm_type\": \"ipmi\", \"disk\": \"40\", \"arch\": \"x86_64\", \"cpu\": \"2\", \"pm_user\": \"admin\" } ], \"overcloud\": {\"password\": \"7adbbbeedc5b7a07ba1917e1b3b228334f9a2d4e\", \"endpoint\": \"http://192.168.1.150:5000/v2.0/\" } }",
"[DEFAULT] image_path = /home/stack/images local_ip = 10.200.0.1/24 undercloud_public_vip = 10.200.0.2 undercloud_admin_vip = 10.200.0.3 undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem local_interface = eth0 masquerade_network = 10.200.0.0/24 dhcp_start = 10.200.0.5 dhcp_end = 10.200.0.24 network_cidr = 10.200.0.0/24 network_gateway = 10.200.0.1 #discovery_interface = br-ctlplane discovery_iprange = 10.200.0.150,10.200.0.200 discovery_runbench = 1 undercloud_admin_password = testpass",
"resource_registry: OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/cinder-storage.yaml OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml parameter_defaults: InternalApiNetCidr: 172.16.0.0/24 TenantNetCidr: 172.17.0.0/24 StorageNetCidr: 172.18.0.0/24 StorageMgmtNetCidr: 172.19.0.0/24 ExternalNetCidr: 192.168.1.0/24 InternalApiAllocationPools: [{ start : 172.16.0.10 , end : 172.16.0.200 }] TenantAllocationPools: [{ start : 172.17.0.10 , end : 172.17.0.200 }] StorageAllocationPools: [{ start : 172.18.0.10 , end : 172.18.0.200 }] StorageMgmtAllocationPools: [{ start : 172.19.0.10 , end : 172.19.0.200 }] # Leave room for floating IPs in the External allocation pool ExternalAllocationPools: [{ start : 192.168.1.150 , end : 192.168.1.199 }] InternalApiNetworkVlanID: 201 StorageNetworkVlanID: 202 StorageMgmtNetworkVlanID: 203 TenantNetworkVlanID: 204 ExternalNetworkVlanID: 100 # Set to the router gateway on the external network ExternalInterfaceDefaultRoute: 192.168.1.1 # Set to \"br-ex\" if using floating IPs on native VLAN on bridge br-ex NeutronExternalNetworkBridge: \"''\" # Customize bonding options if required BondInterfaceOvsOptions: \"bond_mode=active-backup lacp=off other_config:bond-miimon-interval=100\"",
"This file is managed by Puppet global daemon group haproxy log /dev/log local0 maxconn 20480 pidfile /var/run/haproxy.pid ssl-default-bind-ciphers !SSLv2:kEECDH:kRSA:kEDH:kPSK:+3DES:!aNULL:!eNULL:!MD5:!EXP:!RC4:!SEED:!IDEA:!DES ssl-default-bind-options no-sslv3 stats socket /var/lib/haproxy/stats mode 600 level user stats timeout 2m user haproxy defaults log global maxconn 4096 mode tcp retries 3 timeout http-request 10s timeout queue 2m timeout connect 10s timeout client 2m timeout server 2m timeout check 10s listen aodh bind 192.168.1.150:8042 transparent bind 172.16.0.10:8042 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8042 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8042 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8042 check fall 5 inter 2000 rise 2 listen cinder bind 192.168.1.150:8776 transparent bind 172.16.0.10:8776 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8776 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8776 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8776 check fall 5 inter 2000 rise 2 listen glance_api bind 192.168.1.150:9292 transparent bind 172.18.0.10:9292 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk GET /healthcheck server overcloud-controller-0.internalapi.localdomain 172.18.0.17:9292 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.18.0.15:9292 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.18.0.16:9292 check fall 5 inter 2000 rise 2 listen gnocchi bind 192.168.1.150:8041 transparent bind 172.16.0.10:8041 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8041 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8041 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8041 check fall 5 inter 2000 rise 2 listen haproxy.stats bind 10.200.0.6:1993 transparent mode http stats enable stats uri / stats auth admin:PnDD32EzdVCf73CpjHhFGHZdV listen heat_api bind 192.168.1.150:8004 transparent bind 172.16.0.10:8004 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk timeout client 10m timeout server 10m server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8004 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8004 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8004 check fall 5 inter 2000 rise 2 listen heat_cfn bind 192.168.1.150:8000 transparent bind 172.16.0.10:8000 transparent mode http http-request set-header X-Forwarded-Proto https if { 
ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk timeout client 10m timeout server 10m server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8000 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8000 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8000 check fall 5 inter 2000 rise 2 listen horizon bind 192.168.1.150:80 transparent bind 172.16.0.10:80 transparent mode http cookie SERVERID insert indirect nocache option forwardfor option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:80 check cookie overcloud-controller-0 fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:80 check cookie overcloud-controller-0 fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:80 check cookie overcloud-controller-0 fall 5 inter 2000 rise 2 listen keystone_admin bind 192.168.24.15:35357 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk GET /v3 server overcloud-controller-0.ctlplane.localdomain 192.168.24.9:35357 check fall 5 inter 2000 rise 2 server overcloud-controller-1.ctlplane.localdomain 192.168.24.8:35357 check fall 5 inter 2000 rise 2 server overcloud-controller-2.ctlplane.localdomain 192.168.24.18:35357 check fall 5 inter 2000 rise 2 listen keystone_public bind 192.168.1.150:5000 transparent bind 172.16.0.10:5000 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk GET /v3 server overcloud-controller-0.internalapi.localdomain 172.16.0.13:5000 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:5000 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:5000 check fall 5 inter 2000 rise 2 listen mysql bind 172.16.0.10:3306 transparent option tcpka option httpchk stick on dst stick-table type ip size 1000 timeout client 90m timeout server 90m server overcloud-controller-0.internalapi.localdomain 172.16.0.13:3306 backup check inter 1s on-marked-down shutdown-sessions port 9200 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:3306 backup check inter 1s on-marked-down shutdown-sessions port 9200 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:3306 backup check inter 1s on-marked-down shutdown-sessions port 9200 listen neutron bind 192.168.1.150:9696 transparent bind 172.16.0.10:9696 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:9696 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:9696 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:9696 check fall 5 inter 2000 rise 2 listen nova_metadata bind 172.16.0.10:8775 transparent option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8775 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8775 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8775 check fall 5 inter 2000 rise 2 listen 
nova_novncproxy bind 192.168.1.150:6080 transparent bind 172.16.0.10:6080 transparent balance source http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option tcpka timeout tunnel 1h server overcloud-controller-0.internalapi.localdomain 172.16.0.13:6080 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:6080 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:6080 check fall 5 inter 2000 rise 2 listen nova_osapi bind 192.168.1.150:8774 transparent bind 172.16.0.10:8774 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8774 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8774 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8774 check fall 5 inter 2000 rise 2 listen nova_placement bind 192.168.1.150:8778 transparent bind 172.16.0.10:8778 transparent mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8778 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8778 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8778 check fall 5 inter 2000 rise 2 listen panko bind 192.168.1.150:8977 transparent bind 172.16.0.10:8977 transparent http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0.internalapi.localdomain 172.16.0.13:8977 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:8977 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:8977 check fall 5 inter 2000 rise 2 listen redis bind 172.16.0.13:6379 transparent balance first option tcp-check tcp-check send AUTH\\ V2EgUh2pvkr8VzU6yuE4XHsr9\\r\\n tcp-check send PING\\r\\n tcp-check expect string +PONG tcp-check send info\\ replication\\r\\n tcp-check expect string role:master tcp-check send QUIT\\r\\n tcp-check expect string +OK server overcloud-controller-0.internalapi.localdomain 172.16.0.13:6379 check fall 5 inter 2000 rise 2 server overcloud-controller-1.internalapi.localdomain 172.16.0.14:6379 check fall 5 inter 2000 rise 2 server overcloud-controller-2.internalapi.localdomain 172.16.0.15:6379 check fall 5 inter 2000 rise 2 listen swift_proxy_server bind 192.168.1.150:8080 transparent bind 172.18.0.10:8080 transparent option httpchk GET /healthcheck timeout client 2m timeout server 2m server overcloud-controller-0.storage.localdomain 172.18.0.17:8080 check fall 5 inter 2000 rise 2 server overcloud-controller-1.storage.localdomain 172.18.0.15:8080 check fall 5 inter 2000 rise 2 server overcloud-controller-2.storage.localdomain 172.18.0.16:8080 check fall 5 inter 2000 rise 2",
"totem { version: 2 cluster_name: tripleo_cluster transport: udpu token: 10000 } nodelist { node { ring0_addr: overcloud-controller-0 nodeid: 1 } node { ring0_addr: overcloud-controller-1 nodeid: 2 } node { ring0_addr: overcloud-controller-2 nodeid: 3 } } quorum { provider: corosync_votequorum } logging { to_logfile: yes logfile: /var/log/cluster/corosync.log to_syslog: yes }",
"[global] osd_pool_default_pgp_num = 128 osd_pool_default_min_size = 1 auth_service_required = cephx mon_initial_members = overcloud-controller-0 , overcloud-controller-1 , overcloud-controller-2 fsid = 8c835acc-6838-11e5-bb96-2cc260178a92 cluster_network = 172.19.0.11/24 auth_supported = cephx auth_cluster_required = cephx mon_host = 172.18.0.17,172.18.0.15,172.18.0.16 auth_client_required = cephx osd_pool_default_size = 3 osd_pool_default_pg_num = 128 public_network = 172.18.0.17/24"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_deployment_and_usage/assembly_example-ha-deployment_rhosp
|
4.368. cluster
|
4.368. cluster 4.368.1. RHBA-2013:1054 - cluster and gfs2-utils bug fix update Updated cluster and gfs2-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Using redundant hardware, shared disk storage, power management, and robust cluster communication and application failover mechanisms, a cluster can meet the needs of the enterprise market. Bug Fix BZ# 982698 Previously, the cman init script did not handle its lock file correctly. During a node reboot, this could have caused the node itself to be evicted from the cluster by other members. With this update, the cman init script now handles the lock file correctly, and no fencing action is taken by other nodes of the cluster. Users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cluster
|
function::user_string_quoted_utf32
|
function::user_string_quoted_utf32 Name function::user_string_quoted_utf32 - Quote given user UTF-32 string. Synopsis Arguments addr The user address to retrieve the string from Description This function combines quoting as per string_quoted and UTF-32 decoding as per user_string_utf32 .
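A minimal usage sketch follows as a shell one-liner; the probed binary and function are hypothetical, and it assumes the first function argument points to a NUL-terminated UTF-32 string in user space (fetched here with pointer_arg):

# Print a quoted copy of the UTF-32 string passed as the first argument of a hypothetical function
stap -e 'probe process("/usr/local/bin/wide_app").function("log_message") { printf("%s\n", user_string_quoted_utf32(pointer_arg(1))); exit() }'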
|
[
"user_string_quoted_utf32:string(addr:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-quoted-utf32
|
4.131. libcmpiutil
|
4.131. libcmpiutil 4.131.1. RHEA-2011:1586 - libcmpiutil enhancement update An updated libcmpiutil package that adds one enhancement is now available for Red Hat Enterprise Linux 6. The libcmpiutil library provides an application programming interface (API) for performing common tasks with various Common Manageability Programming Interface (CMPI) providers. Enhancement BZ# 694550 With this update, the performance and the interface of the libcmpiutil library have been enhanced, which is used by the libvirt-cim package. All libcmpiutil users are advised to upgrade to this updated package, which adds this enhancement.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libcmpiutil
|
Extension APIs
|
Extension APIs OpenShift Container Platform 4.13 Reference guide for extension APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/extension_apis/index
|
Chapter 8. Performing health checks on Red Hat Quay deployments
|
Chapter 8. Performing health checks on Red Hat Quay deployments Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios. Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system's current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified quicker. 8.1. Red Hat Quay health check endpoints Important Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output. Table 8.1. Health check endpoints Health check Description Endpoint Example output instance The instance endpoint acquires the entire status of the specific Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , disk_space , registry_gunicorn , service_key , and web_gunicorn. Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/instance or https://{quay-ip-endpoint}/health {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} endtoend The endtoend endpoint conducts checks on all services of your Red Hat Quay instance. Returns a dict with key-value pairs for the following: auth , database , redis , storage . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/endtoend {"data":{"services":{"auth":true,"database":true,"redis":true,"storage":true}},"status_code":200} warning The warning endpoint conducts a check on the warnings. Returns a dict with key-value pairs for the following: disk_space_warning . Returns a number indicating the health check response of either 200 , which indicates that the instance is healthy, or 503 , which indicates an issue with your deployment. https://{quay-ip-endpoint}/health/warning {"data":{"services":{"disk_space_warning":true}},"status_code":503} 8.2. Navigating to a Red Hat Quay health check endpoint Use the following procedure to navigate to the instance endpoint. This procedure can be repeated for endtoend and warning endpoints. Procedure On your web browser, navigate to https://{quay-ip-endpoint}/health/instance . 
You are taken to the health instance page, which returns information like the following: {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} For Red Hat Quay, "status_code": 200 means that the instance is healthy. Conversely, if you receive "status_code": 503 , there is an issue with your deployment. Additional resources
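For example, you can also query the same endpoint from the command line; the hostname below is a placeholder for your own Red Hat Quay route, and the -k flag is only needed if the deployment uses a self-signed certificate:

# Query the instance health endpoint (placeholder hostname)
curl -k https://quay.example.com/health/instance

A healthy instance returns the same JSON shown above with "status_code": 200.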
|
[
"{\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploy_red_hat_quay_-_high_availability/health-check-quay
|
Chapter 11. Users and Roles Administration
|
Chapter 11. Users and Roles Administration 11.1. User Roles There are three user roles available for Web Administration. Admin : The Admin role gives the user complete rights to manage all Web Administration operations. Normal User : The Normal User role authorizes the user to perform operations such as importing a cluster and enabling or disabling volume profiling, but restricts managing users and other administrative operations. Read-only User : The Read-only User role authorizes the user to only view and monitor cluster-wide metrics and readable data. The user can launch Grafana dashboards from the Web Administration interface but is restricted from performing any storage operations. This role is suited for users performing monitoring tasks. 11.2. Configuring Roles To add and configure a new user, follow these steps: Log in to the Web Administration interface and, in the navigation pane, click Admin > Users . The users list is displayed. To add a new user, click Add at the right-hand side. Enter the user information in the given fields. To enable or disable email notifications, toggle the ON-OFF button. Select a Role from the three available roles and click Save . The new user is successfully created. 11.2.1. Editing Users To edit an existing user: Navigate to the user view by clicking Admin > Users from the interface navigation. Locate the user to be edited and click Edit at the right-hand side. Edit the required information and click Save . 11.2.2. Disabling Notifications and Deleting User Disabling Notifications To disable email notifications for a user: Navigate to the user view by clicking Admin > Users from the interface navigation. Click the vertical ellipsis next to the Edit button and click Disable Email Notification from the callout menu. Email notification is successfully disabled for the user. Deleting User To delete an existing user: Navigate to the user view by clicking Admin > Users from the interface navigation. Locate the user to be deleted and click the vertical ellipsis next to the Edit button. A callout menu opens; click Delete User . A confirmation box appears. Click Delete .
| null |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/monitoring_guide/users_and_roles_administration
|
13.4. Using virtual list view control to request a contiguous subset of a large search result
|
13.4. Using virtual list view control to request a contiguous subset of a large search result Directory Server supports the LDAP virtual list view control. This control enables an LDAP client to request a contiguous subset of a large search result. For example, you have stored an address book with 100,000 entries in Directory Server. By default, a query for all entries returns all entries at once. This is a resource-intensive and time-consuming operation, and clients often do not require the whole data set because, if the user scrolls through the results, only a partial set is visible. However, if the client uses the VLV control, the server only returns a subset and, for example, if the user scrolls in the client application, the server returns more entries. This reduces the load on the server, and the client does not need to store and process all data at once. VLV also improves the performance of server-sorted searches when all search parameters are fixed. Directory Server pre-computes the search results within the VLV index. Therefore, the VLV index is much more efficient than retrieving the results and sorting them afterwards. In Directory Server, the VLV control is always available. However, if you use it in a large directory, a VLV index, also called a browsing index, can significantly improve the speed. Directory Server does not maintain VLV indexes for attributes in the way it maintains standard indexes. The server generates VLV indexes dynamically based on attributes set in entries and the location of those entries in the directory tree. Unlike standard entries, VLV entries are special entries in the database. 13.4.1. How the VLV control works in ldapsearch commands Typically, you use the virtual list view (VLV) feature in LDAP client applications. However, for testing purposes, for example, you can use the ldapsearch utility to request only partial results. To use the VLV feature in ldapsearch commands, specify the -E option for both the sss (server-side sorting) and vlv search extensions: The sss search extension has the following syntax: The vlv search extension has the following syntax: before sets the number of entries returned before the targeted one. after sets the number of entries returned after the targeted one. index , count , and value help to determine the target entry. If you set value , the target entry is the first one having its first sorting attribute starting with the value. Otherwise, you set count to 0 , and the target entry is determined by the index value (starting from 1). If the count value is higher than 0 , the target entry is determined by the ratio index * number of entries / count . Example 13.1. Output of an ldapsearch command with VLV search extension The following command searches in ou=People,dc=example,dc=com . The server then sorts the results by the cn attribute and returns the uid attributes of the 70th entry together with one entry before and two entries after the offset. For additional details, see the -E parameter description in the ldapsearch (1) man page. 13.4.2. Enabling unauthenticated users to use the VLV control By default, the access control instruction (ACI) in the oid=2.16.840.1.113730.3.4.9,cn=features,cn=config entry enables only authenticated users to use the VLV control. To also enable non-authenticated users to use the VLV control, update the ACI by changing userdn = "ldap:///all" to userdn = "ldap:///anyone" .
Procedure Update the ACI in oid=2.16.840.1.113730.3.4.9,cn=features,cn=config : Verification Perform a query with the VLV control without specifying a bind user: This command requires that the server allows anonymous binds. If the command succeeds but returns no entries, run the query again with a bind user to ensure that the query works when using authentication. 13.4.3. Creating a VLV index using the command line to improve the speed of VLV queries Follow this procedure to create a virtual list view (VLV) index, also called a browsing index, for entries in ou=People,dc=example,dc=com that contain a mail attribute and have the objectClass attribute set to person . Prerequisites Your client applications use the VLV control. Client applications need to query a contiguous subset of a large search result. The directory contains a large number of entries. Procedure Create the VLV search entry: This command uses the following options: --name sets the name of the search entry. This can be any name. --search-base sets the base DN for the VLV index. Directory Server creates the VLV index on this entry. --search-scope sets the scope of the search to run for entries in the VLV index. You can set this option to 0 (base search), 1 (one-level search), or 2 (subtree search). --search-filter sets the filter Directory Server applies when it creates the VLV index. Only entries that match this filter become part of the index. userRoot is the name of the database in which to create the entry. Create the index entry: This command uses the following options: --index-name sets the name of the index entry. This can be any name. --parent-name sets the name of the VLV search entry and must match the name you set in the previous step. --sort sets the attribute names and their sort order. Separate the attributes with a space. --index-it causes Directory Server to automatically start an index task after the entry is created. dc=example,dc=com is the suffix of the database in which to create the entry. Verification Verify the successful creation of the VLV index in the /var/log/dirsrv/slapd-instance_name/errors file: Use the VLV control in an ldapsearch command to query only specific records from the directory: This example assumes you have entries consecutively named uid=user001 to at least uid=user072 in ou=People,dc=example,dc=com . For additional details, see the -E parameter description in the ldapsearch (1) man page. 13.4.4. Creating a VLV index using the web console to improve the speed of VLV queries Follow this procedure to create a virtual list view (VLV) index, also called a browsing index, for entries in ou=People,dc=example,dc=com that contain a mail attribute and have the objectClass attribute set to person . Prerequisites Your client applications use the VLV control. Client applications need to query a contiguous subset of a large search result. The directory contains a large number of entries. Procedure Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Navigate to Database Suffixes dc=example,dc=com VLV Indexes Click Create VLV Index and fill in the fields: Figure 13.1. Creating a VLV Index Using the Web Console Enter the attribute names, and click Add Sort Index . Select Index VLV on Save . Click Save VLV Index .
Verification Navigate to Monitoring Logging Errors Log Use the VLV control in an ldapsearch command to query only specific records from the directory: This example assumes you have entries consecutively named uid=user001 to at least uid=user072 in ou=People,dc=example,dc=com . For additional details, see the -E parameter description in the ldapsearch (1) man page.
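To illustrate how the two search extensions combine, the following hedged example requests a 20-entry window starting at the 100th cn-sorted entry; the host and suffix reuse the placeholder values from the examples above, and the offset values are arbitrary:

# Return the 100th cn-sorted entry plus the 19 entries that follow it (0 before, 19 after)
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -b "ou=People,dc=example,dc=com" -s one -x -E 'sss=cn' -E 'vlv=0/19/100/0' uid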
|
[
"ldapsearch ... -E 'sss=attribute_list' -E 'vlv=query_options'",
"[!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]",
"[!]vlv=<before>/<after>(/<offset>/<count>|:<value>)",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com uid: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com uid: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com uid: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com uid: user072 search result search: 2 result: 0 Success control: 1.2.840.113556.1.4.474 false MIQAAAADCgEA sortResult: (0) Success control: 2.16.840.1.113730.3.4.10 false MIQAAAALAgFGAgMAnaQKAQA= vlvResult: pos=70 count=40356 context= (0) Success numResponses: 5 numEntries: 4 Press [before/after(/offset/count|:value)] Enter for the next window.",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config changetype: modify replace: aci aci: (targetattr != \"aci\")(version 3.0; acl \"VLV Request Control\"; allow( read, search, compare, proxy ) userdn = \"ldap:///anyone\";)",
"ldapsearch -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend vlv-index add-search --name \" VLV People \" --search-base \" ou=People,dc=example,dc=com \" --search-filter \" (&(objectClass=person)(mail=*)) \" --search-scope 2 userRoot",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend vlv-index add-index --index-name \" VLV People - cn sn \" --parent-name \" VLV People \" --sort \" cn sn \" --index-it dc=example,dc=com",
"[26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). [26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing.",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \" ou=People,dc=example,dc=com \" -s one -x -E ' sss=cn ' -E ' vlv=1/2/70/0 ' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com cn: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com cn: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com cn: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com cn: user072",
"[26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). [26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing.",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \" ou=People,dc=example,dc=com \" -s one -x -E ' sss=cn ' -E ' vlv=1/2/70/0 ' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com cn: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com cn: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com cn: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com cn: user072"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/creating_indexes-creating_vlv_indexes
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/making-open-source-more-inclusive
|
B.25. gdm
|
B.25. gdm B.25.1. RHSA-2011:0395 - Moderate: gdm security update Updated gdm packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The GNOME Display Manager (GDM) provides the graphical login screen, shown shortly after boot up, log out, and when user-switching. CVE-2011-0727 A race condition flaw was found in the way GDM handled the cache directories used to store users' dmrc and face icon files. A local attacker could use this flaw to trick GDM into changing the ownership of an arbitrary file via a symbolic link attack, allowing them to escalate their privileges. Red Hat would like to thank Sebastian Krahmer of the SuSE Security Team for reporting this issue. All users should upgrade to these updated packages, which contain a backported patch to correct this issue. GDM must be restarted for this update to take effect. Rebooting achieves this, but changing the runlevel from 5 to 3 and back to 5 also restarts GDM.
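For example, on a system that uses SysV runlevels, one way to cycle the runlevel (and therefore restart GDM) is sketched below; run it as root from a text console rather than from within the graphical session:

# Switch to runlevel 3 (multi-user, no X), then back to runlevel 5 (graphical login)
init 3
init 5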
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/gdm
|
Release notes for Red Hat build of OpenJDK 8.0.372
|
Release notes for Red Hat build of OpenJDK 8.0.372 Red Hat build of OpenJDK 8 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.372/index
|
Updating clusters
|
Updating clusters OpenShift Container Platform 4.12 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/index
|
Chapter 14. Replacing storage nodes
|
Chapter 14. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 14.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 14.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" 14.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of the node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Beside the required machine, click Action menu (...) Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 14.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save .
From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from the Red Hat OpenStack Platform console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support .
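As a quick recap of the command-line path, the following hedged sketch strings the labeling and verification commands together; the node name is hypothetical:

# Label the replacement node so OpenShift Data Foundation schedules storage pods on it (hypothetical node name)
oc label node worker-new-0 cluster.ocs.openshift.io/openshift-storage=""
# Confirm the label is present and that OSD pods start on the node
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
oc get pods -o wide -n openshift-storage | egrep -i worker-new-0 | egrep osd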
|
[
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_nodes
|
Chapter 13. Using TLS certificates for applications accessing RGW
|
Chapter 13. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in a form such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret, and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the kubernetes secret: <secret_name> The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the kubernetes secret: <secret_name> The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert .
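For example, you might save the retrieved certificate to a file and point an S3 client at it. The secret name and endpoint below are placeholders, and the use of the AWS CLI is only one possible client:

# Extract the internal RGW TLS certificate to a local file (placeholder secret name)
oc get secrets/ocs-storagecluster-cos-ceph-rgw-tls-cert -o jsonpath='{.data..tls\.crt}' | base64 -d > rgw.crt
# Use the certificate as the CA bundle for an S3 client, for example the AWS CLI (placeholder endpoint)
aws --ca-bundle ./rgw.crt --endpoint-url https://rgw.example.com s3 ls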
|
[
"oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d",
"oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/using-tls-certificates-for-applications-accessing-rgw_rhodf
|
Chapter 12. Bare metal builds with Red Hat Quay on OpenShift Container Platform
|
Chapter 12. Bare metal builds with Red Hat Quay on OpenShift Container Platform Documentation for the builds feature has been moved to Builders and image automation . This chapter will be removed in a future version of Red Hat Quay.
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/bare-metal-builds
|
18.4. RAID Support in the Anaconda Installer
|
18.4. RAID Support in the Anaconda Installer The Anaconda installer automatically detects any hardware and firmware RAID sets on a system, making them available for installation. Anaconda also supports software RAID using mdraid , and can recognize existing mdraid sets. Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, create a partition on it spanning the entire disk, and use that partition as the RAID set member. When the root file system uses a RAID set, Anaconda adds special kernel command-line options to the bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root file system. For instructions on configuring RAID during installation, see the Red Hat Enterprise Linux 7 Installation Guide .
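Outside the installer, the same full-disk approach can be sketched with parted; the disk name is a placeholder and the commands are destructive, so treat this as an illustration only:

# Create a single partition spanning the whole disk and flag it as a Linux RAID member (placeholder disk)
parted /dev/sdb --script mklabel gpt mkpart primary 0% 100% set 1 raid on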
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/raidinstall
|
1.3. Vulnerability Assessment
|
1.3. Vulnerability Assessment Given time, resources, and motivation, an attacker can break into nearly any system. All of the security procedures and technologies currently available cannot guarantee that any systems are completely safe from intrusion. Routers help secure gateways to the Internet. Firewalls help secure the edge of the network. Virtual Private Networks safely pass data in an encrypted stream. Intrusion detection systems warn you of malicious activity. However, the success of each of these technologies is dependent upon a number of variables, including: The expertise of the staff responsible for configuring, monitoring, and maintaining the technologies. The ability to patch and update services and kernels quickly and efficiently. The ability of those responsible to keep constant vigilance over the network. Given the dynamic state of data systems and technologies, securing corporate resources can be quite complex. Due to this complexity, it is often difficult to find expert resources for all of your systems. While it is possible to have personnel knowledgeable in many areas of information security at a high level, it is difficult to retain staff who are experts in more than a few subject areas. This is mainly because each subject area of information security requires constant attention and focus. Information security does not stand still. A vulnerability assessment is an internal audit of your network and system security, the results of which indicate the confidentiality, integrity, and availability of your network (as explained in Section 1.1.1, "Standardizing Security" ). Typically, vulnerability assessment starts with a reconnaissance phase, during which important data regarding the target systems and resources is gathered. This phase leads to the system readiness phase, whereby the target is essentially checked for all known vulnerabilities. The readiness phase culminates in the reporting phase, where the findings are classified into categories of high, medium, and low risk, and methods for improving the security (or mitigating the risk of vulnerability) of the target are discussed. If you were to perform a vulnerability assessment of your home, you would likely check each door to your home to see if it is closed and locked. You would also check every window, making sure that it closes completely and latches correctly. This same concept applies to systems, networks, and electronic data. Malicious users are the thieves and vandals of your data. Focus on their tools, mentality, and motivations, and you can then react swiftly to their actions.
When you perform an inside-looking-around vulnerability assessment, you are at an advantage since you are internal and your status is elevated to trusted. This is the viewpoint you and your co-workers have once logged on to your systems. You see print servers, file servers, databases, and other resources. There are striking distinctions between the two types of vulnerability assessments. Being internal to your company gives you more privileges than an outsider. In most organizations, security is configured to keep intruders out. Very little is done to secure the internals of the organization (such as departmental firewalls, user-level access controls, and authentication procedures for internal resources). Typically, there are many more resources when looking around inside as most systems are internal to a company. Once you are outside the company, your status is untrusted. The systems and resources available to you externally are usually very limited. Consider the difference between vulnerability assessments and penetration tests . Think of a vulnerability assessment as the first step to a penetration test. The information gleaned from the assessment is used for testing. Whereas the assessment is undertaken to check for holes and potential vulnerabilities, the penetration testing actually attempts to exploit the findings. Assessing network infrastructure is a dynamic process. Security, both information and physical, is dynamic. Performing an assessment shows an overview, which can turn up false positives and false negatives. A false positive is a result, where the tool finds vulnerabilities which in reality do not exist. A false negative is when it omits actual vulnerabilities. Security administrators are only as good as the tools they use and the knowledge they retain. Take any of the assessment tools currently available, run them against your system, and it is almost a guarantee that there are some false positives. Whether by program fault or user error, the result is the same. The tool may find false positives, or, even worse, false negatives. Now that the difference between a vulnerability assessment and a penetration test is defined, take the findings of the assessment and review them carefully before conducting a penetration test as part of your new best practices approach. Warning Do not attempt to exploit vulnerabilities on production systems. Doing so can have adverse effects on productivity and efficiency of your systems and network. The following list examines some of the benefits to performing vulnerability assessments. Creates proactive focus on information security. Finds potential exploits before crackers find them. Results in systems being kept up to date and patched. Promotes growth and aids in developing staff expertise. Abates financial loss and negative publicity. 1.3.2. Establishing a Methodology for Vulnerability Assessment To aid in the selection of tools for a vulnerability assessment, it is helpful to establish a vulnerability assessment methodology. Unfortunately, there is no predefined or industry approved methodology at this time; however, common sense and best practices can act as a sufficient guide. What is the target? Are we looking at one server, or are we looking at our entire network and everything within the network? Are we external or internal to the company? The answers to these questions are important as they help determine not only which tools to select but also the manner in which they are used. 
To learn more about establishing methodologies, see the following website: https://www.owasp.org/ - The Open Web Application Security Project 1.3.3. Vulnerability Assessment Tools An assessment can start by using some form of an information-gathering tool. When assessing the entire network, map the layout first to find the hosts that are running. Once located, examine each host individually. Focusing on these hosts requires another set of tools. Knowing which tools to use may be the most crucial step in finding vulnerabilities. Just as in any aspect of everyday life, there are many different tools that perform the same job. This concept applies to performing vulnerability assessments as well. There are tools specific to operating systems, applications, and even networks (based on the protocols used). Some tools are free; others are not. Some tools are intuitive and easy to use, while others are cryptic and poorly documented but have features that other tools do not. Finding the right tools may be a daunting task and, in the end, experience counts. If possible, set up a test lab and try out as many tools as you can, noting the strengths and weaknesses of each. Review the README file or man page for the tools. Additionally, look to the Internet for more information, such as articles, step-by-step guides, or even mailing lists specific to the tools. The tools discussed below are just a small sampling of the available tools. 1.3.3.1. Scanning Hosts with Nmap Nmap is a popular tool that can be used to determine the layout of a network. Nmap has been available for many years and is probably the most often used tool when gathering information. An excellent manual page is included that provides detailed descriptions of its options and usage. Administrators can use Nmap on a network to find host systems and open ports on those systems. Nmap is a competent first step in vulnerability assessment. You can map out all the hosts within your network and even pass an option that allows Nmap to attempt to identify the operating system running on a particular host. Nmap is a good foundation for establishing a policy of using secure services and restricting unused services. To install Nmap , run the yum install nmap command as the root user. 1.3.3.1.1. Using Nmap Nmap can be run from a shell prompt by typing the nmap command followed by the host name or IP address of the machine to scan: nmap <hostname> For example, to scan a machine with host name foo.example.com , type the following at a shell prompt: The results of a basic scan (which could take up to a few minutes, depending on where the host is located and other network conditions) look similar to the following: Nmap tests the most common network communication ports for listening or waiting services. This knowledge can be helpful to an administrator who wants to close unnecessary or unused services. For more information about using Nmap , see the official homepage at the following URL: http://www.insecure.org/ 1.3.3.2. Nessus Nessus is a full-service security scanner. The plug-in architecture of Nessus allows users to customize it for their systems and networks. As with any scanner, Nessus is only as good as the signature database it relies upon. Fortunately, Nessus is frequently updated and features full reporting, host scanning, and real-time vulnerability searches. Remember that there could be false positives and false negatives, even in a tool as powerful and as frequently updated as Nessus . 
Note The Nessus client and server software requires a subscription to use. It has been included in this document as a reference to users who may be interested in using this popular application. For more information about Nessus , see the official website at the following URL: http://www.nessus.org/ 1.3.3.3. OpenVAS OpenVAS ( Open Vulnerability Assessment System ) is a set of tools and services that can be used to scan for vulnerabilities and for a comprehensive vulnerability management. The OpenVAS framework offers a number of web-based, desktop, and command line tools for controlling the various components of the solution. The core functionality of OpenVAS is provided by a security scanner, which makes use of over 33 thousand daily-updated Network Vulnerability Tests ( NVT ). Unlike Nessus (see Section 1.3.3.2, " Nessus " ), OpenVAS does not require any subscription. For more information about OpenVAS, see the official website at the following URL: http://www.openvas.org/ 1.3.3.4. Nikto Nikto is an excellent common gateway interface ( CGI ) script scanner. Nikto not only checks for CGI vulnerabilities but does so in an evasive manner, so as to elude intrusion-detection systems. It comes with thorough documentation which should be carefully reviewed prior to running the program. If you have web servers serving CGI scripts, Nikto can be an excellent resource for checking the security of these servers. More information about Nikto can be found at the following URL: http://cirt.net/nikto2
|
[
"~]USD nmap foo.example.com",
"Interesting ports on foo.example.com: Not shown: 1710 filtered ports PORT STATE SERVICE 22/tcp open ssh 53/tcp open domain 80/tcp open http 113/tcp closed auth"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-vulnerability_assessment
|
2.8.2.4. Other Ports
|
2.8.2.4. Other Ports The Firewall Configuration Tool includes an Other ports section for specifying custom IP ports as being trusted by iptables . For example, to allow IRC and Internet printing protocol (IPP) to pass through the firewall, add the following to the Other ports section: 194:tcp,631:tcp
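For reference, the rules that the tool ultimately adds to iptables are roughly equivalent to opening the ports by hand, as in this hedged sketch:

# Allow new inbound connections to IRC (194/tcp) and Internet printing protocol (631/tcp)
iptables -A INPUT -m state --state NEW -p tcp --dport 194 -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp --dport 631 -j ACCEPT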
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Basic_Firewall_Configuration-Other_Ports
|
function::callers
|
function::callers Name function::callers - Return first n elements of kernel stack backtrace Synopsis Arguments n number of levels to descend in the stack (not counting the top level). If n is -1, print the entire stack. Description This function returns a string of the first n hex addresses from the backtrace of the kernel stack. Output may be truncated as per maximum string length (MAXSTRINGLEN).
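A minimal usage sketch follows; the probe point is chosen only for illustration, and the script exits after the first hit:

# Print the first three caller addresses from the kernel stack when vfs_read is entered
stap -e 'probe kernel.function("vfs_read") { printf("%s\n", callers(3)); exit() }'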
|
[
"callers:string(n:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-callers
|
Chapter 3. Performing a cluster update
|
Chapter 3. Performing a cluster update 3.1. Updating a cluster using the CLI You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the OpenShift CLI ( oc ). 3.1.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a previous state. Have a recent Container Storage Interface (CSI) volume snapshot in case you need to restore persistent volumes due to a pod failure. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . Ensure that you address all Upgradeable=False conditions so the cluster allows an update to the next minor version. An alert displays at the top of the Cluster Settings page when you have one or more cluster Operators that cannot be updated. You can still update to the available patch update for the minor release you are currently on. Review the list of APIs that were removed in Kubernetes 1.27, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see Preparing to update to OpenShift Container Platform 4.14 . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs, which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. Additional resources Support policy for unmanaged Operators 3.1.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them.
To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ). Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 3.1.3. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not roll back automatically. Additional resources For information on which machine configuration changes require a reboot, see the note in About the Machine Config Operator . 3.1.4. Updating a cluster by using the CLI You can use the OpenShift CLI ( oc ) to review and request cluster updates. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Install the OpenShift CLI ( oc ) that matches the version of your update.
Log in to the cluster as a user with cluster-admin privileges. Pause all MachineHealthCheck resources. Procedure View the available updates and note the version number of the update that you want to apply: USD oc adm upgrade Example output Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd Note If there are no available updates, updates that are supported but not recommended might still be available. See Updating along a conditional update path for more information. For details and information on how to perform a Control Plane Only channel update, please refer to the Preparing to perform a Control Plane Only update page, listed in the Additional resources section. Based on your organization requirements, set the appropriate update channel. For example, you can set your channel to stable-4.13 or fast-4.13 . For more information about channels, refer to Understanding update channels and releases listed in the Additional resources section. USD oc adm upgrade channel <channel> For example, to set the channel to stable-4.14 : USD oc adm upgrade channel stable-4.14 Important For production clusters, you must subscribe to a stable-* , eus-* , or fast-* channel. Note When you are ready to move to the next minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the next minor version is available in the path. Apply an update: To update to the latest version: USD oc adm upgrade --to-latest=true 1 To update to a specific version: USD oc adm upgrade --to=<version> 1 1 1 <version> is the update version that you obtained from the output of the oc adm upgrade command. Important When using oc adm upgrade --help , there is a listed option for --force . This is heavily discouraged , as using the --force option bypasses cluster-side guards, including release verification and precondition checks. Using --force does not guarantee a successful update. Bypassing guards puts the cluster at risk. Review the status of the Cluster Version Operator: USD oc adm upgrade After the update completes, you can confirm that the cluster version has updated to the new version: USD oc adm upgrade Example output Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available.
You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss. If you are updating your cluster to the next minor version, such as version X.y to X.(y+1), it is recommended to confirm that your nodes are updated before deploying workloads that rely on a new feature: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.27.3 ip-10-0-170-223.ec2.internal Ready master 82m v1.27.3 ip-10-0-179-95.ec2.internal Ready worker 70m v1.27.3 ip-10-0-182-134.ec2.internal Ready worker 70m v1.27.3 ip-10-0-211-16.ec2.internal Ready master 82m v1.27.3 ip-10-0-250-100.ec2.internal Ready worker 69m v1.27.3 Additional resources Performing a Control Plane Only update Updating along a conditional update path Understanding update channels and releases 3.1.5. Updating along a conditional update path You can update along a recommended conditional update path using the web console or the OpenShift CLI ( oc ). When a conditional update is not recommended for your cluster, you can update along a conditional update path using the OpenShift CLI ( oc ) 4.10 or later. Procedure To view the description of the update when it is not recommended because a risk might apply, run the following command: USD oc adm upgrade --include-not-recommended If the cluster administrator evaluates the potential known risks and decides it is acceptable for the current cluster, then the administrator can waive the safety guards and proceed with the update by running the following command: USD oc adm upgrade --allow-not-recommended --to <version> <.> <.> <version> is the supported but not recommended update version that you obtained from the output of the previous command. Additional resources Understanding update channels and releases 3.1.6. Changing the update server by using the CLI Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. The default value for upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Procedure Change the upstream parameter value in the cluster version: USD oc patch clusterversion/version --patch '{"spec":{"upstream":"<update-server-url>"}}' --type=merge The <update-server-url> variable specifies the URL for the update server. Example output clusterversion.config.openshift.io/version patched 3.2. Updating a cluster using the web console You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the web console. Note Use the web console or oc adm upgrade channel <channel> to change the update channel. You can follow the steps in Updating a cluster using the CLI to complete the update after you change to a 4.14 channel. 3.2.1. Before updating the OpenShift Container Platform cluster Before updating, consider the following: You have recently backed up etcd. In PodDisruptionBudget , if minAvailable is set to 1 , the nodes are drained to apply pending machine configs that might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. You might need to update the cloud provider resources for the new release if your cluster uses manually maintained credentials. You must review administrator acknowledgement requests, take any recommended actions, and provide the acknowledgement when you are ready.
You can perform a partial update by updating the worker or custom pool nodes to accommodate the time it takes to update. You can pause and resume within the progress bar of each pool. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. 3.2.2. Changing the update server by using the web console Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Navigate to Administration Cluster Settings , click version . Click the YAML tab and then edit the upstream parameter value: Example output ... spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1 ... 1 The <update-server-url> variable specifies the URL for the update server. The default upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Click Save . Additional resources Understanding update channels and releases 3.2.3. Pausing a MachineHealthCheck resource by using the web console During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Compute MachineHealthChecks . To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to each MachineHealthCheck resource. For example, to add the annotation to the machine-api-termination-handler resource, complete the following steps: Click the Options menu next to the machine-api-termination-handler and click Edit annotations . In the Edit annotations dialog, click Add more . In the Key and Value fields, add cluster.x-k8s.io/paused and "" values, respectively, and click Save . 3.2.4. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update.
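If you prefer to verify these prerequisites from the command line rather than the web console, a rough sketch such as the following can help. It assumes the machine health checks live in the default openshift-machine-api namespace.

#!/usr/bin/env bash
set -euo pipefail

# Pause every MachineHealthCheck by adding the paused annotation.
for mhc in $(oc get machinehealthcheck -n openshift-machine-api -o name); do
  oc -n openshift-machine-api annotate "${mhc}" cluster.x-k8s.io/paused="" --overwrite
done

# Confirm that no machine config pool is paused; paused pools are skipped
# during the update.
oc get machineconfigpool \
  -o custom-columns=NAME:.metadata.name,PAUSED:.spec.paused

# List Operator subscriptions so you can cross-check installed versions
# against your target release.
oc get subscriptions.operators.coreos.com -A \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CHANNEL:.spec.channel,CSV:.status.installedCSV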
See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.14 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. Note If you are updating your cluster to the minor version, for example from version 4.10 to 4.11, confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Additional resources Updating installed Operators 3.2.5. Viewing conditional updates in the web console You can view and assess the risks associated with particular updates with conditional updates. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. 
Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing an advanced update strategy, such as a canary rollout, an EUS update, or a control-plane update. Procedure From the web console, click Administration Cluster settings page and review the contents of the Details tab. You can enable Include supported but not recommended versions in the Select new version dropdown of the Update cluster modal to populate the dropdown list with conditional updates. Note If a Supported but not recommended version is selected, more information is provided with potential issues with the version. Review the notification detailing the potential risks to updating. Additional resources Updating installed Operators Update recommendations and Conditional Updates 3.2.6. Performing a canary rollout update In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster. These use cases include, but are not limited to: You have mission-critical applications that you do not want unavailable during the update. You can slowly test the applications on your nodes in small batches after the update. You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows. The rolling update process is not a typical update workflow. With larger clusters, it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider whether your organization wants to use a rolling update and carefully plan the implementation of the process before you start. The rolling update process described in this topic involves: Creating one or more custom machine config pools (MCPs). Labeling each node that you do not want to update immediately to move those nodes to the custom MCPs. Pausing those custom MCPs, which prevents updates to those nodes. Performing the cluster update. Unpausing one custom MCP, which triggers the update on those nodes. Testing the applications on those nodes to make sure the applications work as expected on those newly-updated nodes. Optionally removing the custom labels from the remaining nodes in small batches and testing the applications on those nodes. Note Pausing an MCP should be done with careful consideration and for short periods of time only. If you want to use the canary rollout update process, see Performing a canary rollout update . 3.2.7. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. 
However, it is good practice to perform the etcd backup in case your update fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not roll back automatically. Additional resources About the Machine Config Operator . 3.3. Performing a Control Plane Only update Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform <4.y> to <4.y+1>, and then to <4.y+2>. You cannot update from OpenShift Container Platform <4.y> to <4.y+2> directly. However, administrators who want to update between two even-numbered minor versions can do so, incurring only a single reboot of non-control plane hosts. Important This update was previously known as an EUS-to-EUS update and is now referred to as a Control Plane Only update. These updates are only viable between even-numbered minor versions of OpenShift Container Platform. There are several caveats to consider when attempting a Control Plane Only update. Control Plane Only updates are only offered after updates between all versions involved have been made available in stable channels. If you encounter issues during or after updating to the odd-numbered minor version but before updating to the even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. You can do a partial update by updating the worker or custom pool nodes to accommodate the time it takes for maintenance. You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed. Until the machine config pools are unpaused and the update is complete, some features and bug fixes in <4.y+1> and <4.y+2> of OpenShift Container Platform are not available. All clusters can update by using EUS channels for a conventional update without pools paused, but only clusters with non-control plane MachineConfigPools objects can do Control Plane Only updates with pools paused. 3.3.1. Performing a Control Plane Only update The following procedure pauses all non-master machine config pools and performs updates from OpenShift Container Platform <4.y> to <4.y+1> to <4.y+2>, then unpauses the previously paused machine config pools.
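Because a Control Plane Only update spans two minor versions and can stretch across several maintenance windows, taking a fresh etcd backup before you begin is strongly recommended, as noted in the update considerations earlier in this chapter. A minimal sketch follows, assuming you run it against one healthy control plane node; the node name and backup path are illustrative assumptions.

#!/usr/bin/env bash
set -euo pipefail

# Pick one healthy control plane node; the name here is an assumption.
CONTROL_PLANE_NODE="ip-10-0-168-251.ec2.internal"

# Run the backup script on the node through a debug pod.
# The backup is written to the host path given as the last argument.
oc debug node/"${CONTROL_PLANE_NODE}" -- chroot /host \
  /usr/local/bin/cluster-backup.sh /home/core/assets/backup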
Following this procedure reduces the total update duration and the number of times worker nodes are restarted. Prerequisites Review the release notes for OpenShift Container Platform <4.y+1> and <4.y+2>. Review the release notes and product lifecycles for any layered products and Operator Lifecycle Manager (OLM) Operators. Some may require updates either before or during a Control Plane Only update. Ensure that you are familiar with version-specific prerequisites, such as the removal of deprecated APIs, that are required prior to updating from OpenShift Container Platform <4.y+1> to <4.y+2>. If your cluster uses in-tree vSphere volumes, update vSphere to version 7.0u3L+ or 8.0u2+. Important If you do not update vSphere to 7.0u3L+ or 8.0u2+ before initiating an OpenShift Container Platform update, known issues might occur with your cluster after the update. For more information, see Known Issues with OpenShift 4.12 to 4.13 or 4.13 to 4.14 vSphere CSI Storage Migration . 3.3.1.1. Performing a Control Plane Only update using the web console Prerequisites Verify that machine config pools are unpaused. Have access to the web console as a user with admin privileges. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of Up to date and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, click Compute MachineConfigPools and review the contents of the Update status column. Note If your machine config pools have an Updating status, please wait for this status to change to Up to date . This process could take several minutes. Set your channel to eus-<4.y+2> . To set your channel, click Administration Cluster Settings Channel . You can edit your channel by clicking on the current hyperlinked channel. Pause all worker machine pools except for the master pool. You can perform this action on the MachineConfigPools tab under the Compute page. Select the vertical ellipses to the machine config pool you'd like to pause and click Pause updates . Update to version <4.y+1> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+1> updates are complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. If necessary, update your OLM Operators by using the Administrator perspective on the web console. You can find more information on how to perform these actions in "Updating installed Operators"; see "Additional resources". Update to version <4.y+2> and complete up to the Save step. You can find more information on how to perform these actions in "Updating a cluster by using the web console"; see "Additional resources". Ensure that the <4.y+2> update is complete by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Unpause all previously paused machine config pools. You can perform this action on the MachineConfigPools tab under the Compute page. 
Select the vertical ellipses to the machine config pool you'd like to unpause and click Unpause updates . Important If pools are paused, the cluster is not permitted to upgrade to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that your cluster has completed the update to version <4.y+2>. You can verify that your pools have updated on the MachineConfigPools tab under the Compute page by confirming that the Update status has a value of Up to date . Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) compute machines, those machines temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. For more information, see "Updating a cluster that includes RHEL compute machines" in the additional resources section. You can verify that your cluster has completed the update by viewing the Last completed version of your cluster. You can find this information on the Cluster Settings page under the Details tab. Additional resources Updating installed Operators Updating a cluster by using the web console Updating a cluster that includes RHEL compute machines 3.3.1.2. Performing a Control Plane Only update using the CLI Prerequisites Verify that machine config pools are unpaused. Update the OpenShift CLI ( oc ) to the target version before each update. Important It is highly discouraged to skip this prerequisite. If the OpenShift CLI ( oc ) is not updated to the target version before your update, unexpected issues may occur. Procedure Using the Administrator perspective on the web console, update any Operator Lifecycle Manager (OLM) Operators to the versions that are compatible with your intended updated version. You can find more information on how to perform this action in "Updating installed Operators"; see "Additional resources". Verify that all machine config pools display a status of UPDATED and that no machine config pool displays a status of UPDATING . To view the status of all machine config pools, run the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False Your current version is <4.y>, and your intended version to update is <4.y+2>. Change to the eus-<4.y+2> channel by running the following command: USD oc adm upgrade channel eus-<4.y+2> Note If you receive an error message indicating that eus-<4.y+2> is not one of the available channels, this indicates that Red Hat is still rolling out EUS version updates. This rollout process generally takes 45-90 days starting at the GA date. Pause all worker machine pools except for the master pool by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}' Note You cannot pause the master pool. Update to the latest version by running the following command: USD oc adm upgrade --to-latest Example output Updating to latest version <4.y+1.z> Review the cluster version to ensure that the updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+1.z> ... 
Update to version <4.y+2> by running the following command: USD oc adm upgrade --to-latest Retrieve the cluster version to ensure that the <4.y+2> updates are complete by running the following command: USD oc adm upgrade Example output Cluster version is <4.y+2.z> ... To update your worker nodes to <4.y+2>, unpause all previously paused machine config pools by running the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}' Important If pools are not unpaused, the cluster is not permitted to update to any future minor versions, and some maintenance tasks are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools are updated and that the update to version <4.y+2> is complete by running the following command: USD oc get mcp Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) compute machines, those machines temporarily become unavailable during the update process. You must run the upgrade playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. For more information, see "Updating a cluster that includes RHEL compute machines" in the additional resources section. Example output NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False Additional resources Updating installed Operators Updating a cluster that includes RHEL compute machines 3.3.1.3. Performing a Control Plane Only update for layered products and Operators installed through Operator Lifecycle Manager In addition to the Control Plane Only update steps mentioned for the web console and CLI, there are additional steps to consider when performing Control Plane Only updates for clusters with the following: Layered products Operators installed through Operator Lifecycle Manager (OLM) What is a layered product? Layered products refer to products that are made of multiple underlying products that are intended to be used together and cannot be broken into individual subscriptions. For examples of layered OpenShift Container Platform products, see Layered Offering On OpenShift . As you perform a Control Plane Only update for the clusters of layered products and those of Operators that have been installed through OLM, you must complete the following: You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Confirm the cluster version compatibility between the current and intended Operator versions. You can verify which versions your OLM Operators are compatible with by using the Red Hat OpenShift Container Platform Operator Update Information Checker . As an example, here are the steps to perform a Control Plane Only update from <4.y> to <4.y+2> for OpenShift Data Foundation (ODF). This can be done through the CLI or web console. 
For information on how to update clusters through your desired interface, see Performing a Control Plane Only update using the web console and Performing a Control Plane Only update using the CLI in "Additional resources". Example workflow Pause the worker machine pools. Update OpenShift <4.y> to OpenShift <4.y+1>. Update ODF <4.y> to ODF <4.y+1>. Update OpenShift <4.y+1> to OpenShift <4.y+2>. Update to ODF <4.y+2>. Unpause the worker machine pools. Note The update to ODF <4.y+2> can happen before or after worker machine pools have been unpaused. Additional resources Updating installed Operators Performing a Control Plane Only update using the web console Performing a Control Plane Only update using the CLI Preventing workload updates during a Control Plane Only update 3.4. Performing a canary rollout update A canary update is an update strategy where worker node updates are performed in discrete, sequential stages instead of updating all worker nodes at the same time. This strategy can be useful in the following scenarios: You want a more controlled rollout of worker node updates to ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. You want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. You want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. In these scenarios, you can create multiple custom machine config pools (MCPs) to prevent certain worker nodes from updating when you update the cluster. After the rest of the cluster is updated, you can update those worker nodes in batches at appropriate times. 3.4.1. Example Canary update strategy The following example describes a canary update strategy where you have a cluster with 100 nodes with 10% excess capacity, you have maintenance windows that must not exceed 4 hours, and you know that it takes no longer than 8 minutes to drain and reboot a worker node. Note The values are an example only. The time it takes to drain a node might vary depending on factors such as workloads. Defining custom machine config pools In order to organize the worker node updates into separate stages, you can begin by defining the following MCPs: workerpool-canary with 10 nodes workerpool-A with 30 nodes workerpool-B with 30 nodes workerpool-C with 30 nodes Updating the canary worker pool During your first maintenance window, you pause the MCPs for workerpool-A , workerpool-B , and workerpool-C , and then initiate the cluster update. This updates components that run on top of OpenShift Container Platform and the 10 nodes that are part of the unpaused workerpool-canary MCP. The other three MCPs are not updated because they were paused. Determining whether to proceed with the remaining worker pool updates If for some reason you determine that your cluster or workload health was negatively affected by the workerpool-canary update, you then cordon and drain all nodes in that pool while still maintaining sufficient capacity until you have diagnosed and resolved the problem. When everything is working as expected, you evaluate the cluster and workload health before deciding to unpause, and thus update, workerpool-A , workerpool-B , and workerpool-C in succession during each additional maintenance window.
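If the canary pool does misbehave, the cordon-and-drain step described above can be scripted. The following is a rough sketch, assuming the canary nodes carry the node-role.kubernetes.io/workerpool-canary label used in this example.

#!/usr/bin/env bash
set -euo pipefail

# Cordon and drain every node in the canary pool so its workloads move to
# nodes in the pools that are still running the previous version.
for node in $(oc get nodes -l node-role.kubernetes.io/workerpool-canary -o name); do
  oc adm cordon "${node}"
  oc adm drain "${node}" --ignore-daemonsets --delete-emptydir-data --force
done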
Managing worker node updates using custom MCPs provides flexibility, however it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that might affect the entire cluster. It is recommended that you carefully consider your organizational needs and carefully plan the implementation of the process before you start. Important Pausing a machine config pool prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires and the MCO attempts to automatically renew the certificate, the MCO cannot push the newly rotated certificates to those nodes. This causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. Note It is not recommended to update the MCPs to different OpenShift Container Platform versions. For example, do not update one MCP from 4.y.10 to 4.y.11 and another to 4.y.12. This scenario has not been tested and might result in an undefined cluster state. 3.4.2. About the canary rollout update process and MCPs In OpenShift Container Platform, nodes are not considered individually. Instead, they are grouped into machine config pools (MCPs). By default, nodes in an OpenShift Container Platform cluster are grouped into two MCPs: one for the control plane nodes and one for the worker nodes. An OpenShift Container Platform update affects all MCPs concurrently. During the update, the Machine Config Operator (MCO) drains and cordons all nodes within an MCP up to the specified maxUnavailable number of nodes, if a max number is specified. By default, maxUnavailable is set to 1 . Draining and cordoning a node deschedules all pods on the node and marks the node as unschedulable. After the node is drained, the Machine Config Daemon applies a new machine configuration, which can include updating the operating system (OS). Updating the OS requires the host to reboot. Using custom machine config pools To prevent specific nodes from being updated, you can create custom MCPs. Because the MCO does not update nodes within paused MCPs, you can pause the MCPs containing nodes that you do not want to update before initiating a cluster update. Using one or more custom MCPs can give you more control over the sequence in which you update your worker nodes. For example, after you update the nodes in the first MCP, you can verify the application compatibility and then update the rest of the nodes gradually to the new version. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. Note To ensure the stability of the control plane, creating a custom MCP from the control plane nodes is not supported. The Machine Config Operator (MCO) ignores any custom MCP created for the control plane nodes. 
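The maxUnavailable setting mentioned in the warning above can be inspected, and for custom worker pools optionally tuned, from the CLI. The following sketch is illustrative only; the pool name and the 10% value are assumptions, and the control plane pool should be left at the default of 1.

#!/usr/bin/env bash
set -euo pipefail

# Show the current maxUnavailable value for each machine config pool.
oc get machineconfigpool \
  -o custom-columns=NAME:.metadata.name,MAXUNAVAILABLE:.spec.maxUnavailable

# Optionally raise it on a custom worker pool only; never change it for the
# control plane ("master") pool.
oc patch machineconfigpool/workerpool-A --type=merge \
  --patch '{"spec":{"maxUnavailable":"10%"}}'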
Considerations when using custom machine config pools Give careful consideration to the number of MCPs that you create and the number of nodes in each MCP, based on your workload deployment topology. For example, if you must fit updates into specific maintenance windows, you must know how many nodes OpenShift Container Platform can update within a given window. This number is dependent on your unique cluster and workload characteristics. You must also consider how much extra capacity is available in your cluster to determine the number of custom MCPs and the amount of nodes within each MCP. In a case where your applications fail to work as expected on newly updated nodes, you can cordon and drain those nodes in the pool, which moves the application pods to other nodes. However, you must determine whether the available nodes in the remaining MCPs can provide sufficient quality-of-service (QoS) for your applications. Note You can use this update process with all documented OpenShift Container Platform update processes. However, the process does not work with Red Hat Enterprise Linux (RHEL) machines, which are updated using Ansible playbooks. 3.4.3. About performing a canary rollout update The following steps outline the high-level workflow of the canary rollout update process: Create custom machine config pools (MCP) based on the worker pool. Note You can change the maxUnavailable setting in an MCP to specify the percentage or the number of machines that can be updating at any given time. The default is 1 . Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. Add a node selector to the custom MCPs. For each node that you do not want to update simultaneously with the rest of the cluster, add a matching label to the nodes. This label associates the node to the MCP. Important Do not remove the default worker label from the nodes. The nodes must have a role label to function properly in the cluster. Pause the MCPs you do not want to update as part of the update process. Perform the cluster update. The update process updates the MCPs that are not paused, including the control plane nodes. Test your applications on the updated nodes to ensure they are working as expected. Unpause one of the remaining MCPs, wait for the nodes in that pool to finish updating, and test the applications on those nodes. Repeat this process until all worker nodes are updated. Optional: Remove the custom label from updated nodes and delete the custom MCPs. 3.4.4. Creating machine config pools to perform a canary rollout update To perform a canary rollout update, you must first create one or more custom machine config pools (MCP). 
Procedure List the worker nodes in your cluster by running the following command: USD oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{"\n"}{end}' nodes Example output ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm For each node that you want to delay, add a custom label to the node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/<custom_label>= For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary= Example output node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled Create the new MCP: Create an MCP YAML file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: "" 3 1 Specify a name for the MCP. 2 Specify the worker and custom MCP name. 3 Specify the custom label you added to the nodes that you want in this pool. Create the MachineConfigPool object by running the following command: USD oc create -f <file_name> Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created View the list of MCPs in the cluster and their current state by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m The new machine config pool, workerpool-canary , is created and the number of nodes to which you added the custom label are shown in the machine counts. The worker MCP machine counts are reduced by the same number. It can take several minutes to update the machine counts. In this example, one node was moved from the worker MCP to the workerpool-canary MCP. 3.4.5. Managing machine configuration inheritance for a worker pool canary You can configure a machine config pool (MCP) canary to inherit any MachineConfig assigned to an existing MCP. This configuration is useful when you want to use an MCP canary to test as you update nodes one at a time for an existing MCP. Prerequisites You have created one or more MCPs. Procedure Create a secondary MCP as described in the following two steps: Save the following configuration file as machineConfigPool.yaml . Example machineConfigPool YAML apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: "" # ... Create the new machine config pool by running the following command: USD oc create -f machineConfigPool.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-perf created Add some machines to the secondary MCP. 
The following example labels the worker nodes worker-a , worker-b , and worker-c to the MCP worker-perf : USD oc label node worker-a node-role.kubernetes.io/worker-perf='' USD oc label node worker-b node-role.kubernetes.io/worker-perf='' USD oc label node worker-c node-role.kubernetes.io/worker-perf='' Create a new MachineConfig for the MCP worker-perf as described in the following two steps: Save the following MachineConfig example as a file called new-machineconfig.yaml : Example MachineConfig YAML apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M # ... Apply the MachineConfig by running the following command: USD oc create -f new-machineconfig.yaml Create the new canary MCP and add machines from the MCP you created in the steps. The following example creates an MCP called worker-perf-canary , and adds machines from the worker-perf MCP that you previosuly created. Label the canary worker node worker-a by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf-canary='' Remove the canary worker node worker-a from the original MCP by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf- Save the following file as machineConfigPool-Canary.yaml . Example machineConfigPool-Canary.yaml file apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: "" 1 Optional value. This example includes worker-perf-canary as an additional value. You can use a value in this way to configure members of an additional MachineConfig . Create the new worker-perf-canary by running the following command: USD oc create -f machineConfigPool-Canary.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created Check if the MachineConfig is inherited in worker-perf-canary . Verify that no MCP is degraded by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m Verify that the machines are inherited from worker-perf into worker-perf-canary . USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ... worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5 Verify that kdump service is enabled on worker-a by running the following command: USD systemctl status kdump.service Example output NAME STATUS ROLES AGE VERSION ... 
kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS) Verify that the MCP has updated the crashkernel by running the following command: USD cat /proc/cmdline The output should include the updated crashkernel value, for example: Example output crashkernel=512M Optional: If you are satisfied with the upgrade, you can return worker-a to worker-perf . Return worker-a to worker-perf by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf='' Remove worker-a from the canary MCP by running the following command: USD oc label node worker-a node-role.kubernetes.io/worker-perf-canary- 3.4.6. Pausing the machine config pools After you create your custom machine config pools (MCPs), you then pause those MCPs. Pausing an MCP prevents the Machine Config Operator (MCO) from updating the nodes associated with that MCP. Procedure Patch the MCP that you want paused by running the following command: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":true}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":true}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched 3.4.7. Performing the cluster update After the machine config pools (MCP) enter a ready state, you can perform the cluster update. See one of the following update methods, as appropriate for your cluster: Updating a cluster using the web console Updating a cluster using the CLI After the cluster update is complete, you can begin to unpause the MCPs one at a time. 3.4.8. Unpausing the machine config pools After the OpenShift Container Platform update is complete, unpause your custom machine config pools (MCP) one at a time. Unpausing an MCP allows the Machine Config Operator (MCO) to update the nodes associated with that MCP. Procedure Patch the MCP that you want to unpause: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":false}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":false}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched Optional: Check the progress of the update by using one of the following options: Check the progress from the web console by clicking Administration Cluster settings . Check the progress by running the following command: USD oc get machineconfigpools Test your applications on the updated nodes to ensure that they are working as expected. Repeat this process for any other paused MCPs, one at a time. Note In case of a failure, such as your applications not working on the updated nodes, you can cordon and drain the nodes in the pool, which moves the application pods to other nodes to help maintain the quality-of-service for the applications. This first MCP should be no larger than the excess capacity. 3.4.9. Moving a node to the original machine config pool After you update and verify applications on nodes in a custom machine config pool (MCP), move the nodes back to their original MCP by removing the custom label that you added to the nodes. Important A node must have a role to function properly in the cluster.
Procedure For each node in a custom MCP, remove the custom label from the node by running the following command: USD oc label node <node_name> node-role.kubernetes.io/<custom_label>- For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary- Example output node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled The Machine Config Operator moves the nodes back to the original MCP and reconciles the node to the MCP configuration. To ensure that node has been removed from the custom MCP, view the list of MCPs in the cluster and their current state by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m When the node is removed from the custom MCP and moved back to the original MCP, it can take several minutes to update the machine counts. In this example, one node was moved from the removed workerpool-canary MCP to the worker MCP. Optional: Delete the custom MCP by running the following command: USD oc delete mcp <mcp_name> 3.5. Updating a cluster that includes RHEL compute machines You can perform minor version and patch updates on an OpenShift Container Platform cluster. If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must take additional steps to update those machines. 3.5.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a state. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Additional resources Support policy for unmanaged Operators 3.5.2. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Pause all MachineHealthCheck resources. You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. 
Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See "Updating installed Operators" in the "Additional resources" section for more information on how to check compatibility and, if necessary, update the installed Operators. Your machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. Your RHEL7 workers are replaced with RHEL8 or RHCOS workers. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.14 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. Note If you are updating your cluster to the minor version, for example from version 4.10 to 4.11, confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. Important When you update a cluster that contains Red Hat Enterprise Linux (RHEL) worker machines, those workers temporarily become unavailable during the update process. You must run the update playbook against each RHEL machine as it enters the NotReady state for the cluster to finish updating. Additional resources Updating installed Operators 3.5.3. 
Optional: Adding hooks to perform Ansible tasks on RHEL machines You can use hooks to run Ansible tasks on the RHEL compute machines during the OpenShift Container Platform update. 3.5.3.1. About Ansible hooks for updates When you update OpenShift Container Platform, you can run custom tasks on your Red Hat Enterprise Linux (RHEL) nodes during specific operations by using hooks . Hooks allow you to provide files that define tasks to run before or after specific update tasks. You can use hooks to validate or modify custom infrastructure when you update the RHEL compute nodes in your OpenShift Container Platform cluster. Because the operation fails when a hook fails, you must design hooks that are idempotent, meaning they can run multiple times and provide the same results. Hooks have the following important limitations: - Hooks do not have a defined or versioned interface. They can use internal openshift-ansible variables, but it is possible that the variables will be modified or removed in future OpenShift Container Platform releases. - Hooks do not have error handling, so an error in a hook halts the update process. If you get an error, you must address the problem and then start the update again. 3.5.3.2. Configuring the Ansible inventory file to use hooks You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, in the hosts inventory file under the all:vars section. Prerequisites You have access to the machine that you used to add the RHEL compute machines to your cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines. Procedure After you design the hook, create a YAML file that defines the Ansible tasks for it. This file must be a set of tasks and cannot be a playbook, as shown in the following example: --- # Trivial example forcing an operator to acknowledge the start of an upgrade # file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start" - name: require the user agree to start an upgrade pause: prompt: "Press Enter to start the compute machine update" Modify the hosts Ansible inventory file to specify the hook files. The hook files are specified as parameter values in the [all:vars] section, as shown: Example hook definitions in an inventory file To avoid ambiguity in the paths to the hooks, use absolute paths instead of relative paths in their definitions. 3.5.3.3. Available hooks for RHEL compute machines You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) compute machines in your OpenShift Container Platform cluster. Hook name Description openshift_node_pre_cordon_hook Runs before each node is cordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_upgrade_hook Runs after each node is cordoned but before it is updated. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_pre_uncordon_hook Runs after each node is updated but before it is uncordoned. This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . openshift_node_post_upgrade_hook Runs after each node is uncordoned. It is the last node update action.
This hook runs against each node in serial. If a task must run against a different host, the task must use delegate_to or local_action . 3.5.4. Updating RHEL compute machines in your cluster After you update your cluster, you must update the Red Hat Enterprise Linux (RHEL) compute machines in your cluster. Important Red Hat Enterprise Linux (RHEL) versions 8.6 and later are supported for RHEL compute machines. You can also update your compute machines to another minor version of OpenShift Container Platform if you are using RHEL as the operating system. You do not need to exclude any RPM packages from RHEL when performing a minor version update. Important You cannot update RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. Prerequisites You updated your cluster. Important Because the RHEL machines require assets that are generated by the cluster to complete the update process, you must update the cluster before you update the RHEL worker machines in it. You have access to the local machine that you used to add the RHEL compute machines to your cluster. You must have access to the hosts Ansible inventory file that defines your RHEL machines and the upgrade playbook. For updates to a minor version, the RPM repository is using the same version of OpenShift Container Platform that is running on your cluster. Procedure Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note By default, the base OS RHEL with "Minimal" installation option enables firewalld service. Having the firewalld service enabled on your host prevents you from accessing OpenShift Container Platform logs on the worker. Do not enable firewalld later if you wish to continue accessing OpenShift Container Platform logs on the worker. Enable the repositories that are required for OpenShift Container Platform 4.14: On the machine that you run the Ansible playbooks, update the required repositories: # subscription-manager repos --disable=rhocp-4.13-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.14-for-rhel-8-x86_64-rpms Important As of OpenShift Container Platform 4.11, the Ansible playbooks are provided only for RHEL 8. If a RHEL 7 system was used as a host for the OpenShift Container Platform 4.10 Ansible playbooks, you must either update the Ansible host to RHEL 8, or create a new Ansible host on a RHEL 8 system and copy over the inventories from the old Ansible host. On the machine that you run the Ansible playbooks, update the Ansible package: # yum swap ansible ansible-core On the machine that you run the Ansible playbooks, update the required packages, including openshift-ansible : # yum update openshift-ansible openshift-clients On each RHEL compute node, update the required repositories: # subscription-manager repos --disable=rhocp-4.13-for-rhel-8-x86_64-rpms \ --enable=rhocp-4.14-for-rhel-8-x86_64-rpms Update a RHEL worker machine: Review your Ansible inventory file at /<path>/inventory/hosts and update its contents so that the RHEL 8 machines are listed in the [workers] section, as shown in the following example: Change to the openshift-ansible directory: USD cd /usr/share/ansible/openshift-ansible Run the upgrade playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. Note The upgrade playbook only updates the OpenShift Container Platform packages. It does not update the operating system packages. 
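For reference, a hosts inventory that combines the [workers] entries described above with the optional hook variables from the previous section might look roughly like the following sketch, written here as a heredoc so you can adapt it in place. The hostnames, user, and file paths are illustrative assumptions only.

#!/usr/bin/env bash
set -euo pipefail

# Write an example inventory; replace <path>, hostnames, and hook paths with
# values that match your environment.
cat > /<path>/inventory/hosts <<'EOF'
[all:vars]
ansible_user=root
openshift_kubeconfig_path="~/.kube/config"

# Optional update hooks; use absolute paths (assumed locations).
openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml

[workers]
mycluster-rhel8-0.example.com
mycluster-rhel8-1.example.com
mycluster-rhel8-2.example.com
mycluster-rhel8-3.example.com
EOF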
After you update all of the workers, confirm that all of your cluster nodes have updated to the new version: # oc get node Example output NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.27.3 mycluster-control-plane-1 Ready master 145m v1.27.3 mycluster-control-plane-2 Ready master 145m v1.27.3 mycluster-rhel8-0 Ready worker 98m v1.27.3 mycluster-rhel8-1 Ready worker 98m v1.27.3 mycluster-rhel8-2 Ready worker 98m v1.27.3 mycluster-rhel8-3 Ready worker 98m v1.27.3 Optional: Update the operating system packages that were not updated by the upgrade playbook. To update packages that are not on 4.14, use the following command: # yum update Note You do not need to exclude RPM packages if you are using the same RPM repository that you used when you installed 4.14. 3.6. Updating a cluster in a disconnected environment 3.6.1. About cluster updates in a disconnected environment A disconnected environment is one in which your cluster nodes cannot access the internet. For this reason, you must populate a registry with the installation images. If your registry host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment and then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror registry's host, you can directly push the release images to the local registry. A single container image registry is sufficient to host mirrored images for several clusters in the disconnected network. 3.6.1.1. Mirroring OpenShift Container Platform images To update your cluster in a disconnected environment, your cluster environment must have access to a mirror registry that has the necessary images and resources for your targeted update. The following page has instructions for mirroring images onto a repository in your disconnected cluster: Mirroring OpenShift Container Platform images 3.6.1.2. Performing a cluster update in a disconnected environment You can use one of the following procedures to update a disconnected OpenShift Container Platform cluster: Updating a cluster in a disconnected environment using the OpenShift Update Service Updating a cluster in a disconnected environment without the OpenShift Update Service 3.6.1.3. Uninstalling the OpenShift Update Service from a cluster You can use the following procedure to uninstall a local copy of the OpenShift Update Service (OSUS) from your cluster: Uninstalling the OpenShift Update Service from a cluster 3.6.2. Mirroring OpenShift Container Platform images You must mirror container images onto a mirror registry before you can update a cluster in a disconnected environment. You can also use this procedure in connected environments to ensure your clusters run only approved container images that have satisfied your organizational controls for external content. Note Your mirror registry must be running at all times while the cluster is running. The following steps outline the high-level workflow on how to mirror images to a mirror registry: Install the OpenShift CLI ( oc ) on all devices being used to retrieve and push release images. Download the registry pull secret and add it to your cluster. If you use the oc-mirror OpenShift CLI ( oc ) plugin : Install the oc-mirror plugin on all devices being used to retrieve and push release images. Create an image set configuration file for the plugin to use when determining which release images to mirror. 
You can edit this configuration file later to change which release images that the plugin mirrors. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps as needed to update your mirror registry. If you use the oc adm release mirror command : Set environment variables that correspond to your environment and the release images you want to mirror. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Repeat these steps as needed to update your mirror registry. Compared to using the oc adm release mirror command, the oc-mirror plugin has the following advantages: It can mirror content other than container images. After mirroring images for the first time, it is easier to update images in the registry. The oc-mirror plugin provides an automated way to mirror the release payload from Quay, and also builds the latest graph data image for the OpenShift Update Service running in the disconnected environment. 3.6.2.1. Mirroring resources using the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity to download the required images from the official Red Hat registries. See Mirroring images for a disconnected installation using the oc-mirror plugin for additional details. 3.6.2.2. Mirroring images using the oc adm release mirror command You can use the oc adm release mirror command to mirror images to your mirror registry. 3.6.2.2.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not have an existing solution for a container image registry, the mirror registry for Red Hat OpenShift is included in OpenShift Container Platform subscriptions. The mirror registry for Red Hat OpenShift is a small-scale container registry that you can use to mirror OpenShift Container Platform container images in disconnected installations and updates. 3.6.2.2.2. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.6.2.2.2.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . If you are updating a cluster in a disconnected environment, install the oc version that you plan to update to. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> Additional resources Installing and using CLI plugins 3.6.2.2.2.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install a cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.
The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Optional: If using the oc-mirror plugin, save the file as either ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json : If the .docker or USDXDG_RUNTIME_DIR/containers directories do not exist, create one by entering the following command: USD mkdir -p <directory_name> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers . Copy the pull secret to the appropriate directory by entering the following command: USD cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers , and <auth_file> is either config.json or auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.6.2.2.3. Mirroring images to a mirror registry Important To avoid excessive memory usage by the OpenShift Update Service application, you must mirror release images to a separate repository as described in the following procedure. Prerequisites You configured a mirror registry to use in your disconnected environment and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Use the Red Hat OpenShift Container Platform Update Graph visualizer and update planner to plan an update from one version to another. The OpenShift Update Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions. Set the required environment variables: Export the release version: USD export OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to which you want to update, such as 4.5.4 . 
Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . If you are using the OpenShift Update Service, export an additional local repository name to contain the release images: USD LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>' For <local_release_images_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4-release-images . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Mirror the version images to the mirror registry. If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Mirror the images and configuration manifests to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Note This command also generates and saves the mirrored release image signature config map onto the removable media. Take the media to the disconnected environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Use oc command-line interface (CLI) to log in to the cluster that you are updating. Apply the mirrored release image signature config map to the connected cluster: USD oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1 1 For <image_signature_file> , specify the path and name of the file, for example, signature-sha256-81154f5c03294534.yaml . 
If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} If the local container registry and the cluster are connected to the mirror host, take the following actions: Directly push the release images to the local registry and apply the config map to the cluster by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature Note If you include the --apply-release-image-signature option, do not create the config map for image signature verification. If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} 3.6.3. Updating a cluster in a disconnected environment using the OpenShift Update Service To get an update experience similar to connected clusters, you can use the following procedures to install and configure the OpenShift Update Service (OSUS) in a disconnected environment. The following steps outline the high-level workflow on how to update a cluster in a disconnected environment using OSUS: Configure access to a secured registry. Update the global cluster pull secret to access your mirror registry. Install the OSUS Operator. Create a graph data container image for the OpenShift Update Service. Install the OSUS application and configure your clusters to use the OpenShift Update Service in your environment. Perform a supported update procedure from the documentation as you would with a connected cluster. 3.6.3.1. Using the OpenShift Update Service in a disconnected environment The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform clusters. Red Hat publicly hosts the OpenShift Update Service, and clusters in a connected environment can connect to the service through public APIs to retrieve update recommendations. However, clusters in a disconnected environment cannot access these public APIs to retrieve update information. To have a similar update experience in a disconnected environment, you can install and configure the OpenShift Update Service so that it is available within the disconnected environment. A single OSUS instance is capable of serving recommendations to thousands of clusters. OSUS can be scaled horizontally to cater to more clusters by changing the replica value. So for most disconnected use cases, one OSUS instance is enough. For example, Red Hat hosts just one OSUS instance for the entire fleet of connected clusters. If you want to keep update recommendations separate in different environments, you can run one OSUS instance for each environment. For example, in a case where you have separate test and stage environments, you might not want a cluster in a stage environment to receive update recommendations to version A if that version has not been tested in the test environment yet. The following sections describe how to install an OSUS instance and configure it to provide update recommendations to a cluster. 
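As a hedged illustration of the horizontal scaling described above, you can change the replica value on an existing UpdateService resource with a standard oc patch command. The namespace openshift-update-service, the application name service, and the replica count 3 are placeholders; the actual UpdateService resource is created later in this section.
oc -n openshift-update-service patch updateservice service --type merge --patch '{"spec":{"replicas":3}}'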
Additional resources About the OpenShift Update Service Understanding update channels and releases 3.6.3.2. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a container image registry in your environment with the container images for your update, as described in Mirroring OpenShift Container Platform images . 3.6.3.3. Configuring access to a secured registry for the OpenShift Update Service If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom certificate authority, complete the steps in Configuring additional trust stores for image registry access along with following changes for the update service. The OpenShift Update Service Operator needs the config map key name updateservice-registry in the registry CA cert. Image registry CA config map example for the update service apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 The OpenShift Update Service Operator requires the config map key name updateservice-registry in the registry CA cert. 2 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . 3.6.3.4. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 3.6.3.5. Installing the OpenShift Update Service Operator To install the OpenShift Update Service, you must first install the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. 
Note For clusters that are installed in disconnected environments, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see Using Operator Lifecycle Manager on restricted networks . 3.6.3.5.1. Installing the OpenShift Update Service Operator by using the web console You can use the web console to install the OpenShift Update Service Operator. Procedure In the web console, click Operators OperatorHub . Note Enter Update Service into the Filter by keyword... field to find the Operator faster. Choose OpenShift Update Service from the list of available Operators, and click Install . Select an Update channel . Select a Version . Select A specific namespace on the cluster under Installation Mode . Select a namespace for Installed Namespace or accept the recommended namespace openshift-update-service . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a cluster administrator to approve the Operator update. Click Install . Go to Operators Installed Operators and verify that the OpenShift Update Service Operator is installed. Ensure that OpenShift Update Service is listed in the correct namespace with a Status of Succeeded . 3.6.3.5.2. Installing the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Update Service Operator. Procedure Create a namespace for the OpenShift Update Service Operator: Create a Namespace object YAML file, for example, update-service-namespace.yaml , for the OpenShift Update Service Operator: apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 1 1 Set the openshift.io/cluster-monitoring label to enable Operator-recommended cluster monitoring on this namespace. Create the namespace: USD oc create -f <filename>.yaml For example: USD oc create -f update-service-namespace.yaml Install the OpenShift Update Service Operator by creating the following objects: Create an OperatorGroup object YAML file, for example, update-service-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service Create an OperatorGroup object: USD oc -n openshift-update-service create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-operator-group.yaml Create a Subscription object YAML file, for example, update-service-subscription.yaml : Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: "Automatic" source: "redhat-operators" 1 sourceNamespace: "openshift-marketplace" name: "cincinnati-operator" 1 Specify the name of the catalog source that provides the Operator. For clusters that do not use a custom Operator Lifecycle Manager (OLM), specify redhat-operators . If your OpenShift Container Platform cluster is installed in a disconnected environment, specify the name of the CatalogSource object created when you configured Operator Lifecycle Manager (OLM). 
Create the Subscription object: USD oc create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-subscription.yaml The OpenShift Update Service Operator is installed to the openshift-update-service namespace and targets the openshift-update-service namespace. Verify the Operator installation: USD oc -n openshift-update-service get clusterserviceversions Example output NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded ... If the OpenShift Update Service Operator is listed, the installation was successful. The version number might be different than shown. Additional resources Installing Operators in your namespace . 3.6.3.6. Creating the OpenShift Update Service graph data container image The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the update graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service. Note The oc-mirror OpenShift CLI ( oc ) plugin creates this graph data container image in addition to mirroring release images. If you used the oc-mirror plugin to mirror your release images, you can skip this procedure. Procedure Create a Dockerfile, for example, ./Dockerfile , containing the following: FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"] Use the Dockerfile created in the above step to build a graph data container image, for example, registry.example.com/openshift/graph-data:latest : USD podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest Push the graph data container image created in the previous step to a repository that is accessible to the OpenShift Update Service, for example, registry.example.com/openshift/graph-data:latest : USD podman push registry.example.com/openshift/graph-data:latest Note To push a graph data image to a registry in a disconnected environment, copy the graph data container image created in the previous step to a repository that is accessible to the OpenShift Update Service. Run oc image mirror --help for available options. 3.6.3.7. Creating an OpenShift Update Service application You can create an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 3.6.3.7.1. Creating an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to create an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service.
The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. Click Create UpdateService . Enter a name in the Name field, for example, service . Enter the local pullspec in the Graph Data Image field to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest . In the Releases field, enter the registry and repository created to contain the release images in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images . Enter 2 in the Replicas field. Click Create to create the OpenShift Update Service application. Verify the OpenShift Update Service application: From the UpdateServices list in the Update Service tab, click the Update Service application just created. Click the Resources tab. Verify each application resource has a status of Created . 3.6.3.7.2. Creating an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to create an OpenShift Update Service application. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure Configure the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service The namespace must match the targetNamespaces value from the operator group. Configure the name of the OpenShift Update Service application, for example, service : USD NAME=service Configure the registry and repository for the release images as configured in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images : USD RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images Set the local pullspec for the graph data image to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest : USD GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest Create an OpenShift Update Service application object: USD oc -n "USD{NAMESPACE}" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF Verify the OpenShift Update Service application: Use the following command to obtain a policy engine route: USD while sleep 1; do POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")"; SCHEME="USD{POLICY_ENGINE_GRAPH_URI%%:*}"; if test "USD{SCHEME}" = http -o "USD{SCHEME}" = https; then break; fi; done You might need to poll until the command succeeds. Retrieve a graph from the policy engine. Be sure to specify a valid version for channel . 
For example, if running in OpenShift Container Platform 4.14, use stable-4.14 : USD while sleep 10; do HTTP_CODE="USD(curl --header Accept:application/json --output /dev/stderr --write-out "%{http_code}" "USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.14")"; if test "USD{HTTP_CODE}" -eq 200; then break; fi; echo "USD{HTTP_CODE}"; done This polls until the graph request succeeds; however, the resulting graph might be empty depending on which release images you have mirrored. Note The policy engine route name must not be more than 63 characters based on RFC-1123. If you see ReconcileCompleted status as false with the reason CreateRouteFailed caused by host must conform to DNS 1123 naming convention and must be no more than 63 characters , try creating the Update Service with a shorter name. 3.6.3.8. Configuring the Cluster Version Operator (CVO) After the OpenShift Update Service Operator has been installed and the OpenShift Update Service application has been created, the Cluster Version Operator (CVO) can be updated to pull graph data from the OpenShift Update Service installed in your environment. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. The OpenShift Update Service application has been created. Procedure Set the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service Set the name of the OpenShift Update Service application, for example, service : USD NAME=service Obtain the policy engine route: USD POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")" Set the patch for the pull graph data: USD PATCH="{\"spec\":{\"upstream\":\"USD{POLICY_ENGINE_GRAPH_URI}\"}}" Patch the CVO to use the OpenShift Update Service in your environment: USD oc patch clusterversion version -p USDPATCH --type merge Note See Configuring the cluster-wide proxy to configure the CA to trust the update server. 3.6.3.9. Next steps Before updating your cluster, confirm that the following conditions are met: The Cluster Version Operator (CVO) is configured to use your installed OpenShift Update Service application. The release image signature config map for the new release is applied to your cluster. Note The Cluster Version Operator (CVO) uses release image signatures to ensure that release images have not been modified, by verifying that the release image signatures match the expected result. The current release and update target release images are mirrored to a registry in the disconnected environment. A recent graph data container image has been mirrored to your registry. A recent version of the OpenShift Update Service Operator is installed. Note If you have not recently installed or updated the OpenShift Update Service Operator, there might be a more recent version available. See Using Operator Lifecycle Manager on restricted networks for more information about how to update your OLM catalog in a disconnected environment.
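One lightweight way to check the first condition above is to read back the upstream value that the CVO patch in the previous procedure sets on the ClusterVersion resource. This is only a sketch; the output depends on your policy engine route.
oc get clusterversion version -o jsonpath='{.spec.upstream}{"\n"}'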
After you configure your cluster to use the installed OpenShift Update Service and local mirror registry, you can use any of the following update methods: Updating a cluster using the web console Updating a cluster using the CLI Performing a Control Plane Only update Performing a canary rollout update Updating a cluster that includes RHEL compute machines 3.6.4. Updating a cluster in a disconnected environment without the OpenShift Update Service Use the following procedures to update a cluster in a disconnected environment without access to the OpenShift Update Service. 3.6.4.1. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a local container image registry with the container images for your update, as described in Mirroring OpenShift Container Platform images . You must have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . You must have a recent etcd backup in case your update fails and you must restore your cluster to a previous state . You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next minor version during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . 3.6.4.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ).
Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 3.6.4.3. Retrieving a release image digest In order to update a cluster in a disconnected environment using the oc adm upgrade command with the --to-image option, you must reference the sha256 digest that corresponds to your targeted release image. Procedure Run the following command on a device that is connected to the internet: USD oc adm release info -o 'jsonpath={.digest}{"\n"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE} For {OCP_RELEASE_VERSION} , specify the version of OpenShift Container Platform to which you want to update, such as 4.10.16 . For {ARCHITECTURE} , specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Example output sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d Copy the sha256 digest for use when updating your cluster. 3.6.4.4. Updating the disconnected cluster Update the disconnected cluster to the OpenShift Container Platform version that you downloaded the release images for. Note If you have a local OpenShift Update Service, you can update by using the connected web console or CLI instructions instead of this procedure. Prerequisites You mirrored the images for the new release to your registry. You applied the release image signature ConfigMap for the new release to your cluster. Note The release image signature config map allows the Cluster Version Operator (CVO) to ensure the integrity of release images by verifying that the actual image signatures match the expected signatures. You obtained the sha256 digest for your targeted release image. You installed the OpenShift CLI ( oc ). You paused all MachineHealthCheck resources. Procedure Update the cluster: USD oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest> Where: <defined_registry> Specifies the name of the mirror registry you mirrored your images to. <defined_repository> Specifies the name of the image repository you want to use on the mirror registry. <digest> Specifies the sha256 digest for the targeted release image, for example, sha256:81154f5c03294534e1eaf0319bef7a601134f891689ccede5d705ef659aa8c92 . Note See "Mirroring OpenShift Container Platform images" to review how your mirror registry and repository names are defined. 
If you used an ImageContentSourcePolicy or ImageDigestMirrorSet , you can use the canonical registry and repository names instead of the names you defined. The canonical registry name is quay.io and the canonical repository name is openshift-release-dev/ocp-release . You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy , ImageDigestMirrorSet , or ImageTagMirrorSet object. You cannot add a pull secret to a project. Additional resources Mirroring OpenShift Container Platform images 3.6.4.5. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. An ICSP always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 
If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identify the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP CRs objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 3.6.4.5.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. 
Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use another mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in image pull specifications. 7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:.. . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256 from the mirror registry example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 
2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only respectively. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.redhat.io" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. 
The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 3.6.4.5.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet , and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors . The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot. For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" in the section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: USD oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml file and saves the new YAML files to the idms-files directory. USD oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: USD oc create -f <path_to_the_directory>/<file-name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out. 3.6.4.6. Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots You can scope the mirrored image catalog at the repository level or the wider registry level. A widely scoped ImageContentSourcePolicy resource reduces the number of times the nodes need to reboot in response to changes to the resource. To widen the scope of the mirror image catalog in the ImageContentSourcePolicy resource, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Configure a mirrored image catalog for use in your disconnected cluster. 
Procedure Run the following command, specifying values for <local_registry> , <pull_spec> , and <pull_secret_file> : USD oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry where: <local_registry> is the local registry you have configured for your disconnected cluster, for example, local.registry:5000 . <pull_spec> is the pull specification as configured in your disconnected registry, for example, redhat/redhat-operator-index:v4.14 <pull_secret_file> is the registry.redhat.io pull secret in .json file format. You can download the pull secret from Red Hat OpenShift Cluster Manager . The oc adm catalog mirror command creates a /redhat-operator-index-manifests directory and generates imageContentSourcePolicy.yaml , catalogSource.yaml , and mapping.txt files. Apply the new ImageContentSourcePolicy resource to the cluster: USD oc apply -f imageContentSourcePolicy.yaml Verification Verify that oc apply successfully applied the change to ImageContentSourcePolicy : USD oc get ImageContentSourcePolicy -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.openshift.io/v1alpha1","kind":"ImageContentSourcePolicy","metadata":{"annotations":{},"name":"redhat-operator-index"},"spec":{"repositoryDigestMirrors":[{"mirrors":["local.registry:5000"],"source":"registry.redhat.io"}]}} ... After you update the ImageContentSourcePolicy resource, OpenShift Container Platform deploys the new settings to each node and the cluster starts using the mirrored repository for requests to the source repository. 3.6.4.7. Additional resources Using Operator Lifecycle Manager on restricted networks Machine Config Overview 3.6.5. Uninstalling the OpenShift Update Service from a cluster To remove a local copy of the OpenShift Update Service (OSUS) from your cluster, you must first delete the OSUS application and then uninstall the OSUS Operator. 3.6.5.1. Deleting an OpenShift Update Service application You can delete an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 3.6.5.1.1. Deleting an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to delete an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. From the list of installed OpenShift Update Service applications, select the application to be deleted and then click Delete UpdateService . From the Delete UpdateService? confirmation dialog, click Delete to confirm the deletion. 3.6.5.1.2. Deleting an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to delete an OpenShift Update Service application. 
Procedure Get the OpenShift Update Service application name using the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc get updateservice -n openshift-update-service Example output NAME AGE service 6s Delete the OpenShift Update Service application using the NAME value from the step and the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc delete updateservice service -n openshift-update-service Example output updateservice.updateservice.operator.openshift.io "service" deleted 3.6.5.2. Uninstalling the OpenShift Update Service Operator You can uninstall the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. 3.6.5.2.1. Uninstalling the OpenShift Update Service Operator by using the web console You can use the OpenShift Container Platform web console to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure In the web console, click Operators Installed Operators . Select OpenShift Update Service from the list of installed Operators and click Uninstall Operator . From the Uninstall Operator? confirmation dialog, click Uninstall to confirm the uninstallation. 3.6.5.2.2. Uninstalling the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure Change to the project containing the OpenShift Update Service Operator, for example, openshift-update-service : USD oc project openshift-update-service Example output Now using project "openshift-update-service" on server "https://example.com:6443". Get the name of the OpenShift Update Service Operator operator group: USD oc get operatorgroup Example output NAME AGE openshift-update-service-fprx2 4m41s Delete the operator group, for example, openshift-update-service-fprx2 : USD oc delete operatorgroup openshift-update-service-fprx2 Example output operatorgroup.operators.coreos.com "openshift-update-service-fprx2" deleted Get the name of the OpenShift Update Service Operator subscription: USD oc get subscription Example output NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1 Using the Name value from the step, check the current version of the subscribed OpenShift Update Service Operator in the currentCSV field: USD oc get subscription update-service-operator -o yaml | grep " currentCSV" Example output currentCSV: update-service-operator.v0.0.1 Delete the subscription, for example, update-service-operator : USD oc delete subscription update-service-operator Example output subscription.operators.coreos.com "update-service-operator" deleted Delete the CSV for the OpenShift Update Service Operator using the currentCSV value from the step: USD oc delete clusterserviceversion update-service-operator.v0.0.1 Example output clusterserviceversion.operators.coreos.com "update-service-operator.v0.0.1" deleted 3.7. Updating hardware on nodes running on vSphere You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 15 or later is supported for vSphere virtual machines in a cluster. You can update your virtual hardware immediately or schedule an update in vCenter. 
Important Version 4.14 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. Before upgrading OpenShift 4.12 to OpenShift 4.13, you must update vSphere to v7.0.2 or later ; otherwise, the OpenShift 4.12 cluster is marked un-upgradeable . 3.7.1. Updating virtual hardware on vSphere To update the hardware of your virtual machines (VMs) on VMware vSphere, update your virtual machines separately to reduce the risk of downtime for your cluster. Important As of OpenShift Container Platform 4.13, VMware virtual hardware version 13 is no longer supported. You need to update to VMware version 15 or later for supporting functionality. 3.7.1.1. Updating the virtual hardware for control plane nodes on vSphere To reduce the risk of downtime, it is recommended that control plane nodes be updated serially. This ensures that the Kubernetes API remains available and etcd retains quorum. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the control plane nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.27.3 control-plane-node-1 Ready master 75m v1.27.3 control-plane-node-2 Ready master 75m v1.27.3 Note the names of your control plane nodes. Mark the control plane node as unschedulable. USD oc adm cordon <control_plane_node> Shut down the virtual machine (VM) associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<control_plane_node> Mark the control plane node as schedulable again: USD oc adm uncordon <control_plane_node> Repeat this procedure for each control plane node in your cluster. 3.7.1.2. Updating the virtual hardware for compute nodes on vSphere To reduce the risk of downtime, it is recommended that compute nodes be updated serially. Note Multiple compute nodes can be updated in parallel given workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the compute nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/worker Example output NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.27.3 compute-node-1 Ready worker 30m v1.27.3 compute-node-2 Ready worker 30m v1.27.3 Note the names of your compute nodes. Mark the compute node as unschedulable: USD oc adm cordon <compute_node> Evacuate the pods from the compute node. There are several ways to do this. 
For example, you can evacuate all or selected pods on a node: USD oc adm drain <compute_node> [--pod-selector=<pod_selector>] See the "Understanding how to evacuate pods on nodes" section for other options to evacuate pods from a node. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<compute_node> Mark the compute node as schedulable again: USD oc adm uncordon <compute_node> Repeat this procedure for each compute node in your cluster. 3.7.1.3. Updating the virtual hardware for template on vSphere Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure If the RHCOS template is configured as a vSphere template follow Convert a Template to a Virtual Machine in the VMware documentation prior to the step. Note Once converted from a template, do not power on the virtual machine. Update the virtual machine (VM) in the VMware vSphere client. Complete the steps outlined in Upgrade the Compatibility of a Virtual Machine Manually (VMware vSphere documentation). Convert the VM in the vSphere client to a template by right-clicking on the VM and then selecting Template Convert to Template . Important The steps for converting a VM to a template might change in future vSphere documentation versions. Additional resources Understanding how to evacuate pods on nodes 3.7.2. Scheduling an update for virtual hardware on vSphere Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following Schedule a Compatibility Upgrade for a Virtual Machine in the VMware documentation. When scheduling an update prior to performing an update of OpenShift Container Platform, the virtual hardware update occurs when the nodes are rebooted during the course of the OpenShift Container Platform update. 3.8. Migrating to a cluster with multi-architecture compute machines You can migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines by updating to a multi-architecture, manifest-listed payload. This allows you to add mixed architecture compute nodes to your cluster. For information about configuring your multi-architecture compute machines, see Configuring multi-architecture compute machines on an OpenShift Container Platform cluster . Important Migration from a multi-architecture payload to a single-architecture payload is not supported. Once a cluster has transitioned to using a multi-architecture payload, it can no longer accept a single-architecture update payload. 3.8.1. Migrating to a cluster with multi-architecture compute machines using the CLI Prerequisites You have access to the cluster as a user with the cluster-admin role. Your OpenShift Container Platform version is up to date to at least version 4.13.0. 
For more information on how to update your cluster version, see Updating a cluster using the web console or Updating a cluster using the CLI . You have installed the OpenShift CLI ( oc ) that matches the version for your current cluster. Your oc client is updated to at least version 4.13.0. Your OpenShift Container Platform cluster is installed on AWS, Azure, GCP, bare metal, or IBM P/Z platforms. For more information on selecting a supported platform for your cluster installation, see Selecting a cluster installation type . Procedure Verify that the RetrievedUpdates condition is True in the Cluster Version Operator (CVO) by running the following command: USD oc get clusterversion/version -o=jsonpath="{.status.conditions[?(.type=='RetrievedUpdates')].status}" If the RetrievedUpdates condition is False , you can find supplemental information regarding the failure by using the following command: USD oc adm upgrade For more information about cluster version condition types, see Understanding cluster version condition types . If the condition RetrievedUpdates is False , change the channel to stable-<4.y> or fast-<4.y> with the following command: USD oc adm upgrade channel <channel> After setting the channel, verify that RetrievedUpdates is True . For more information about channels, see Understanding update channels and releases . Migrate to the multi-architecture payload with the following command: USD oc adm upgrade --to-multi-arch Verification You can monitor the migration by running the following command: USD oc adm upgrade Important Machine launches may fail as the cluster settles into the new state. To notice and recover when machines fail to launch, we recommend deploying machine health checks. For more information about machine health checks and how to deploy them, see About machine health checks . The migrations must be complete and all the cluster operators must be stable before you can add compute machine sets with different architectures to your cluster. Additional resources Configuring multi-architecture compute machines on an OpenShift Container Platform cluster Updating a cluster using the web console Updating a cluster using the CLI Understanding cluster version condition types Understanding update channels and releases Selecting a cluster installation type About machine health checks 3.9. Updating hosted control planes On hosted control planes for OpenShift Container Platform, updates are decoupled between the control plane and the nodes. Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node updates. 3.9.1. Updates for the hosted cluster The spec.release value dictates the version of the control plane. The HostedCluster object transmits the intended spec.release value to the HostedControlPlane.spec.release value and runs the appropriate Control Plane Operator version. The hosted control plane manages the rollout of the new version of the control plane components along with any OpenShift Container Platform components through the new version of the Cluster Version Operator (CVO). Important In hosted control planes, the NodeHealthCheck resource cannot detect the status of the CVO. A cluster administrator must manually pause the remediation triggered by NodeHealthCheck , before performing critical operations, such as updating the cluster, to prevent new remediation actions from interfering with cluster updates. 
To pause the remediation, enter the array of strings, for example, pause-test-cluster , as a value of the pauseRequests field in the NodeHealthCheck resource. For more information, see About the Node Health Check Operator . After the cluster update is complete, you can edit or delete the remediation. Navigate to the Compute NodeHealthCheck page, click your node health check, and then click Actions , which shows a drop-down list. 3.9.2. Updates for node pools With node pools, you can configure the software that is running in the nodes by exposing the spec.release and spec.config values. You can start a rolling node pool update in the following ways: Changing the spec.release or spec.config values. Changing any platform-specific field, such as the AWS instance type. The result is a set of new instances with the new type. Changing the cluster configuration, if the change propagates to the node. Node pools support replace updates and in-place updates. The nodepool.spec.release value dictates the version of any particular node pool. A NodePool object completes a replace or an in-place rolling update according to the .spec.management.upgradeType value. After you create a node pool, you cannot change the update type. If you want to change the update type, you must create a node pool and delete the other one. 3.9.2.1. Replace updates for node pools A replace update creates instances in the new version while it removes old instances from the version. This update type is effective in cloud environments where this level of immutability is cost effective. Replace updates do not preserve any manual changes because the node is entirely re-provisioned. 3.9.2.2. In place updates for node pools An in-place update directly updates the operating systems of the instances. This type is suitable for environments where the infrastructure constraints are higher, such as bare metal. In-place updates can preserve manual changes, but will report errors if you make manual changes to any file system or operating system configuration that the cluster directly manages, such as kubelet certificates. 3.9.3. Configuring node pools for hosted control planes On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster. Procedure To create a MachineConfig object inside of a config map in the management cluster, enter the following information: apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:... mode: 420 overwrite: true path: USD{PATH} 1 1 Sets the path on the node where the MachineConfig object is stored. After you add the object to the config map, you can apply the config map to the node pool as follows: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 3.10. Updating the boot loader on RHCOS nodes using bootupd To update the boot loader on RHCOS nodes using bootupd , you must either run the bootupctl update command on RHCOS machines manually or provide a machine config with a systemd unit. 
Unlike grubby or other boot loader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. To configure kernel arguments, see Adding kernel arguments to nodes . Note You can use bootupd to update the boot loader to protect against the BootHole vulnerability. 3.10.1. Updating the boot loader manually You can manually inspect the status of the system and update the boot loader by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output for x86_64 Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version Example output for aarch64 Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version OpenShift Container Platform clusters initially installed on version 4.4 and older require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 3.10.2. Updating the boot loader automatically via a machine config Another way to automatically update the boot loader with bootupd is to create a systemd service unit that updates the boot loader as needed on every boot. This unit runs the bootupctl update command during the boot process and is installed on the nodes via a machine config. Note This configuration is not enabled by default because unexpected interruptions of the update operation may lead to unbootable nodes. If you enable this configuration, make sure to avoid interrupting nodes during the boot process while the boot loader update is in progress. The boot loader update operation generally completes quickly, so the risk is low. Create a Butane config file, 99-worker-bootupctl-update.bu , including the contents of the bootupctl-update.service systemd unit. Note See "Creating machine configs with Butane" for information about Butane. Example Butane config variant: openshift version: 4.14.0 metadata: name: 99-worker-bootupctl-update 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 1 2 On control plane nodes, substitute master for worker in both of these locations. Use Butane to generate a MachineConfig object file, 99-worker-bootupctl-update.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-bootupctl-update.yaml
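For reference, the following is a minimal sketch of the MachineConfig object that Butane might generate from the Butane config above. It is an illustration, not canonical output: the Ignition version shown is an assumption that depends on your Butane release, so compare the sketch against the 99-worker-bootupctl-update.yaml file you actually generate.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # substitute master for control plane nodes
  name: 99-worker-bootupctl-update
spec:
  config:
    ignition:
      version: 3.4.0   # assumed; the emitted version depends on the Butane variant and release
    systemd:
      units:
        - name: bootupctl-update.service
          enabled: true
          contents: |
            [Unit]
            Description=Bootupd automatic update
            [Service]
            ExecStart=/usr/bin/bootupctl update
            RemainAfterExit=yes
            [Install]
            WantedBy=multi-user.target

After the worker MachineConfigPool finishes rolling out the change, the unit should show as enabled on each affected node, for example by running oc debug node/<node_name>, then chroot /host, then systemctl status bootupctl-update.service.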
|
[
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm upgrade",
"Cluster version is 4.13.10 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.13 (available channels: candidate-4.13, candidate-4.14, fast-4.13, stable-4.13) Recommended updates: VERSION IMAGE 4.13.14 quay.io/openshift-release-dev/ocp-release@sha256:406fcc160c097f61080412afcfa7fd65284ac8741ac7ad5b480e304aba73674b 4.13.13 quay.io/openshift-release-dev/ocp-release@sha256:d62495768e335c79a215ba56771ff5ae97e3cbb2bf49ed8fb3f6cefabcdc0f17 4.13.12 quay.io/openshift-release-dev/ocp-release@sha256:73946971c03b43a0dc6f7b0946b26a177c2f3c9d37105441315b4e3359373a55 4.13.11 quay.io/openshift-release-dev/ocp-release@sha256:e1c2377fdae1d063aaddc753b99acf25972b6997ab9a0b7e80cfef627b9ef3dd",
"oc adm upgrade channel <channel>",
"oc adm upgrade channel stable-4.14",
"oc adm upgrade --to-latest=true 1",
"oc adm upgrade --to=<version> 1",
"oc adm upgrade",
"oc adm upgrade",
"Cluster version is <version> Upstream is unset, so the cluster will use an appropriate default. Channel: stable-<version> (available channels: candidate-<version>, eus-<version>, fast-<version>, stable-<version>) No updates available. You may force an update to a specific release image, but doing so might not be supported and might result in downtime or data loss.",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.27.3 ip-10-0-170-223.ec2.internal Ready master 82m v1.27.3 ip-10-0-179-95.ec2.internal Ready worker 70m v1.27.3 ip-10-0-182-134.ec2.internal Ready worker 70m v1.27.3 ip-10-0-211-16.ec2.internal Ready master 82m v1.27.3 ip-10-0-250-100.ec2.internal Ready worker 69m v1.27.3",
"oc adm upgrade --include-not-recommended",
"oc adm upgrade --allow-not-recommended --to <version> <.>",
"oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge",
"clusterversion.config.openshift.io/version patched",
"spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc adm upgrade channel eus-<4.y+2>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade --to-latest",
"Updating to latest version <4.y+1.z>",
"oc adm upgrade",
"Cluster version is <4.y+1.z>",
"oc adm upgrade --to-latest",
"oc adm upgrade",
"Cluster version is <4.y+2.z>",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False",
"oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes",
"ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>=",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=",
"node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] 2 } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3",
"oc create -f <file_name>",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf] } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf: \"\"",
"oc create -f machineConfigPool.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf created",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-b node-role.kubernetes.io/worker-perf=''",
"oc label node worker-c node-role.kubernetes.io/worker-perf=''",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker-perf name: 06-kdump-enable-worker-perf spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"oc create -f new-machineconfig.yaml",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-perf-canary spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-perf,worker-perf-canary] 1 } nodeSelector: matchLabels: node-role.kubernetes.io/worker-perf-canary: \"\"",
"oc create -f machineConfigPool-Canary.yaml",
"machineconfigpool.machineconfiguration.openshift.io/worker-perf-canary created",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2bf1379b39e22bae858ea1a3ff54b2ac True False False 3 3 3 0 5d16h worker rendered-worker-b9576d51e030413cfab12eb5b9841f34 True False False 0 0 0 0 5d16h worker-perf rendered-worker-perf-b98a1f62485fa702c4329d17d9364f6a True False False 2 2 2 0 56m worker-perf-canary rendered-worker-perf-canary-b98a1f62485fa702c4329d17d9364f6a True False False 1 1 1 0 44m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION worker-a Ready worker,worker-perf-canary 5d15h v1.27.13+e709aa5 worker-b Ready worker,worker-perf 5d15h v1.27.13+e709aa5 worker-c Ready worker,worker-perf 5d15h v1.27.13+e709aa5",
"systemctl status kdump.service",
"NAME STATUS ROLES AGE VERSION kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; preset: disabled) Active: active (exited) since Tue 2024-09-03 12:44:43 UTC; 10s ago Process: 4151139 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS) Main PID: 4151139 (code=exited, status=0/SUCCESS)",
"cat /proc/cmdline",
"crashkernel=512M",
"oc label node worker-a node-role.kubernetes.io/worker-perf=''",
"oc label node worker-a node-role.kubernetes.io/worker-perf-canary-",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc get machineconfigpools",
"oc label node <node_name> node-role.kubernetes.io/<custom_label>-",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-",
"node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m",
"oc delete mcp <mcp_name>",
"--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"",
"[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml",
"systemctl disable --now firewalld.service",
"subscription-manager repos --disable=rhocp-4.13-for-rhel-8-x86_64-rpms --enable=rhocp-4.14-for-rhel-8-x86_64-rpms",
"yum swap ansible ansible-core",
"yum update openshift-ansible openshift-clients",
"subscription-manager repos --disable=rhocp-4.13-for-rhel-8-x86_64-rpms --enable=rhocp-4.14-for-rhel-8-x86_64-rpms",
"[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1",
"oc get node",
"NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.27.3 mycluster-control-plane-1 Ready master 145m v1.27.3 mycluster-control-plane-2 Ready master 145m v1.27.3 mycluster-rhel8-0 Ready worker 98m v1.27.3 mycluster-rhel8-1 Ready worker 98m v1.27.3 mycluster-rhel8-2 Ready worker 98m v1.27.3 mycluster-rhel8-3 Ready worker 98m v1.27.3",
"yum update",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"mkdir -p <directory_name>",
"cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"export OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature",
"oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1",
"oc create -f <filename>.yaml",
"oc create -f update-service-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service",
"oc -n openshift-update-service create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"",
"oc create -f <filename>.yaml",
"oc -n openshift-update-service create -f update-service-subscription.yaml",
"oc -n openshift-update-service get clusterserviceversions",
"NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded",
"FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]",
"podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest",
"podman push registry.example.com/openshift/graph-data:latest",
"NAMESPACE=openshift-update-service",
"NAME=service",
"RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images",
"GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest",
"oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF",
"while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done",
"while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done",
"NAMESPACE=openshift-update-service",
"NAME=service",
"POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"",
"PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"",
"oc patch clusterversion version -p USDPATCH --type merge",
"oc get machinehealthcheck -n openshift-machine-api",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5",
"oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-",
"oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}",
"sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d",
"oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>",
"skopeo copy docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.redhat.io 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.28.5 ip-10-0-138-148.ec2.internal Ready master 11m v1.28.5 ip-10-0-139-122.ec2.internal Ready master 11m v1.28.5 ip-10-0-147-35.ec2.internal Ready worker 7m v1.28.5 ip-10-0-153-12.ec2.internal Ready worker 7m v1.28.5 ip-10-0-154-10.ec2.internal Ready master 11m v1.28.5",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml",
"oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry",
"oc apply -f imageContentSourcePolicy.yaml",
"oc get ImageContentSourcePolicy -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}",
"oc get updateservice -n openshift-update-service",
"NAME AGE service 6s",
"oc delete updateservice service -n openshift-update-service",
"updateservice.updateservice.operator.openshift.io \"service\" deleted",
"oc project openshift-update-service",
"Now using project \"openshift-update-service\" on server \"https://example.com:6443\".",
"oc get operatorgroup",
"NAME AGE openshift-update-service-fprx2 4m41s",
"oc delete operatorgroup openshift-update-service-fprx2",
"operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted",
"oc get subscription",
"NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1",
"oc get subscription update-service-operator -o yaml | grep \" currentCSV\"",
"currentCSV: update-service-operator.v0.0.1",
"oc delete subscription update-service-operator",
"subscription.operators.coreos.com \"update-service-operator\" deleted",
"oc delete clusterserviceversion update-service-operator.v0.0.1",
"clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.27.3 control-plane-node-1 Ready master 75m v1.27.3 control-plane-node-2 Ready master 75m v1.27.3",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.27.3 compute-node-1 Ready worker 30m v1.27.3 compute-node-2 Ready worker 30m v1.27.3",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>",
"oc get clusterversion/version -o=jsonpath=\"{.status.conditions[?(.type=='RetrievedUpdates')].status}\"",
"oc adm upgrade",
"oc adm upgrade channel <channel>",
"oc adm upgrade --to-multi-arch",
"oc adm upgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64 Update: At latest version",
"Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.el8_4.1.x86_64,shim-x64-15-8.el8_1.x86_64",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 systemd: units: - name: bootupctl-update.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"butane 99-worker-bootupctl-update.bu -o 99-worker-bootupctl-update.yaml",
"oc apply -f ./99-worker-bootupctl-update.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/updating_clusters/performing-a-cluster-update
|
Appendix H. Revision History
|
Appendix H. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 1.5-6 Wed Aug 07 2019 Eliane Pereira Revision 1.5-5 Thu Jul 11 2019 Eliane Pereira Image Builder has been removed and split to its own Guide Revision 1.5-4 Fri May 24 2019 Sharon Moroney Preparing document for 7.7 Beta publication. Revision 1.5-3 Tue Oct 30 2018 Vladimir Slavik Preparing document for 7.6 GA publication. Revision 1.5-2 Tue Aug 21 2018 Vladimir Slavik Preparing document for 7.6 Beta publication. Revision 1.5-1 Fri Apr 6 2018 Petr Bokoc Preparing document for 7.5 GA publication. Revision 1.5-0 Fri Dec 15 2017 Petr Bokoc Preparing document for 7.5 Beta publication. Revision 1.4-2 Thu Nov 23 2017 Petr Bokoc Asynchronous update. Revision 1.4-1 Fri Oct 13 2017 Petr Bokoc Asynchronous update. Revision 1.4-0 Tue Aug 1 2017 Petr Bokoc Preparing document for 7.4 GA publication. Revision 1.3-9 Mon May 15 2017 Petr Bokoc Preparing document for 7.4 Beta publication. Revision 1.3-8 Tue Apr 4 2017 Petr Bokoc Asynchronous update. Revision 1.3-7 Sun Nov 6 2016 Robert Kratky Version for 7.3 GA publication. Revision 1.3-4 Mon Nov 16 2015 Petr Bokoc Version for 7.2 GA publication. Revision 1.2-2 Wed Feb 18 2015 Petr Bokoc Version for 7.1 GA publication. Revision 1.0-0 Tue Jun 03 2014 Petr Bokoc Version for 7.0 GA publication.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/appe-publican-revision_history
|
21.3. Creating virsh Dump Files
|
21.3. Creating virsh Dump Files Executing a virsh dump command sends a request to dump the core of a guest virtual machine to a file so that errors in the virtual machine can be diagnosed. Running this command may require you to manually ensure proper permissions on the file and path specified by the corefilepath argument. The virsh dump command is similar to a coredump (or the crash utility). To create the virsh dump file, run: While the domain (guest virtual machine domain name) and corefilepath (location of the newly created core dump file) are mandatory, the following arguments are optional: --live creates a dump file on a running machine and does not pause it. --crash stops the guest virtual machine and generates the dump file. The main difference is that the guest virtual machine will not be listed as Stopped, with the reason as Crashed. Note that in virt-manager the status will be listed as Paused. --reset resets the guest virtual machine following a successful dump. Note that these three switches are mutually exclusive. --bypass-cache uses O_DIRECT to bypass the file system cache. --memory-only saves the dump file as an ELF file that includes only the domain's memory and common CPU register values. This option is very useful if the domain uses host devices directly. --verbose displays the progress of the dump. The entire dump process can be monitored with the virsh domjobinfo command and canceled by running virsh domjobabort .
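As an illustration only, a memory-only dump of a hypothetical guest named guest1 could be created and monitored as follows; the domain name and output path are placeholders, and the invoking user needs write access to the target directory:

# dump only the guest's memory and common CPU register values, showing progress
virsh dump guest1 /var/lib/libvirt/dump/guest1.core --memory-only --verbose
# from a second shell, monitor the dump job (cancel it with `virsh domjobabort guest1` if needed)
virsh domjobinfo guest1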
|
[
"virsh dump <domain> <corefilepath> [--bypass-cache] { [--live] | [--crash] | [--reset] } [--verbose] [--memory-only]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-vish-dump
|
Chapter 12. Scheduling resources
|
Chapter 12. Scheduling resources 12.1. Using node selectors to move logging resources A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 12.1.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node #... 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: 1 region: east type: user-node #... 1 Node selectors to match the node label. 
The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: defaultNodeSelector: type=user-node,region=east #... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: region: east #... When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" #... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region #... spec: nodeSelector: region: east type: user-node #... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod metadata: name: west-region #... spec: nodeSelector: region: west #... 12.1.2. Moving logging resources You can configure the Red Hat OpenShift Logging Operator to deploy the pods for logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Red Hat OpenShift Logging Operator pod from its installed location. 
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites You have installed the Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Example ClusterLogging CR apiVersion: logging.openshift.io/v1 kind: ClusterLogging # ... spec: logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana # ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.24.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging # ... spec: # ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. 
After you save the CR, the current Kibana pod is terminated and a new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m collector-42dzz 1/1 Running 0 28m collector-d74rq 1/1 Running 0 28m collector-m5vr9 1/1 Running 0 28m collector-nkxl7 1/1 Running 0 28m collector-pdvqb 1/1 Running 0 28m collector-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m collector-42dzz 1/1 Running 0 29m collector-d74rq 1/1 Running 0 29m collector-m5vr9 1/1 Running 0 29m collector-nkxl7 1/1 Running 0 29m collector-pdvqb 1/1 Running 0 29m collector-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s 12.1.3. Additional resources Placing pods on specific nodes using node selectors 12.2. Using taints and tolerations to control logging pod placement Taints and tolerations allow a node to control which pods should (or should not) be scheduled on it. 12.2.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 12.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. 
Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 12.2.2. Using tolerations to control log store pod placement By default, log store pods have the following toleration configurations: Elasticsearch log store pods default tolerations apiVersion: v1 kind: Pod metadata: name: elasticsearch-example namespace: openshift-logging spec: # ... tolerations: - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists # ... LokiStack log store pods default tolerations apiVersion: v1 kind: Pod metadata: name: lokistack-example namespace: openshift-logging spec: # ... tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists # ... You can configure a toleration for log store pods by adding a taint and then modifying the tolerations syntax in the ClusterLogging custom resource (CR). 
Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have installed the OpenShift CLI ( oc ). You have deployed an internal log store that is either Elasticsearch or LokiStack. Procedure Add a taint to a node where you want to schedule the logging pods, by running the following command: USD oc adm taint nodes <node_name> <key>=<value>:<effect> Example command USD oc adm taint nodes node1 lokistack=node:NoExecute This example places a taint on node1 that has key lokistack , value node , and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that match the taint and remove existing pods that do not match. Edit the logstore section of the ClusterLogging CR to configure a toleration for the log store pods: Example ClusterLogging CR apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: lokistack elasticsearch: nodeCount: 1 tolerations: - key: lokistack 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 # ... 1 Specify the key that you added to the node. 2 Specify the Exists operator to require a taint with the key lokistack to be present on the node. 3 Specify the NoExecute effect. 4 Optional: Specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto node1 . 12.2.3. Using tolerations to control the log visualizer pod placement You can use a specific key/value pair that is not on other pods to ensure that only the Kibana pod can run on the specified node. Prerequisites You have installed the Red Hat OpenShift Logging Operator, the OpenShift Elasticsearch Operator, and the OpenShift CLI ( oc ). Procedure Add a taint to a node where you want to schedule the log visualizer pod by running the following command: USD oc adm taint nodes <node_name> <key>=<value>:<effect> Example command USD oc adm taint nodes node1 kibana=node:NoExecute This example places a taint on node1 that has key kibana , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and remove existing pods that do not match. Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... visualization: type: kibana kibana: tolerations: - key: kibana 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 # ... 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key , value, and effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1 . 12.2.4. Using tolerations to control log collector pod placement By default, log collector pods have the following tolerations configuration: apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: # ... 
collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists # ... Prerequisites You have installed the Red Hat OpenShift Logging Operator and OpenShift CLI ( oc ). Procedure Add a taint to the node where you want to schedule the logging collector pods by running the following command: USD oc adm taint nodes <node_name> <key>=<value>:<effect> Example command USD oc adm taint nodes node1 collector=node:NoExecute This example places a taint on node1 that has key collector , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi # ... 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto node1 . 12.2.5. Additional resources Controlling pod placement using node taints
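If you want to return a node to general scheduling after testing, the taint can be removed with the same oc adm taint command and a trailing hyphen. This cleanup step is not part of the chapter above; it is a minimal sketch that reuses the collector=node:NoExecute example taint from the procedure.
oc adm taint nodes node1 collector=node:NoExecute-
Removing the taint does not remove the toleration from the ClusterLogging CR, so the collector pods remain schedulable on node1 as well as on untainted nodes.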
|
[
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.24.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m collector-42dzz 1/1 Running 0 28m collector-d74rq 1/1 Running 0 28m collector-m5vr9 1/1 Running 0 28m collector-nkxl7 1/1 Running 0 28m collector-pdvqb 1/1 Running 0 28m collector-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m collector-42dzz 1/1 Running 0 29m collector-d74rq 1/1 Running 0 29m collector-m5vr9 1/1 Running 0 29m collector-nkxl7 1/1 Running 0 29m collector-pdvqb 1/1 Running 0 29m collector-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: elasticsearch-example namespace: openshift-logging spec: tolerations: - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists",
"apiVersion: v1 kind: Pod metadata: name: lokistack-example namespace: openshift-logging spec: tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 lokistack=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: lokistack elasticsearch: nodeCount: 1 tolerations: - key: lokistack 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 kibana=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: kibana kibana: tolerations: - key: kibana 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1",
"apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/scheduling-resources
|
function::qsq_print
|
function::qsq_print Name function::qsq_print - Prints a line of statistics for the given queue Synopsis Arguments qname queue name Description This function prints a line containing the following statistics for the given queue: the queue name, the average rate of requests per second, the average wait queue length, the average time on the wait queue, the average time to service a request, the percentage of time the wait queue was used, and the percentage of time a request was being serviced.
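A minimal usage sketch follows. The queue name wait_queue and the ten-second interval are illustrative, and the companion functions mentioned here (qsq_start to register the queue, plus the qs_wait, qs_run, and qs_done event helpers from the same queue_stats tapset) are assumptions about how the statistics would be fed; qsq_print only reports what those calls have accumulated.
stap -e 'probe begin { qsq_start("wait_queue") } probe timer.s(10) { qsq_print("wait_queue") }'
Each timer tick prints one line of statistics for wait_queue in the format described above; meaningful numbers require the event helpers to be called at the appropriate probe points elsewhere in the script.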
|
[
"qsq_print(qname:string)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-qsq-print
|
Chapter 361. uniVocity TSV DataFormat
|
Chapter 361. uniVocity TSV DataFormat Available as of Camel version 2.15 This Data Format uses uniVocity-parsers for reading and writing 3 kinds of tabular data text files: CSV (Comma Separated Values), where the values are separated by a symbol (usually a comma) fixed-width, where the values have known sizes TSV (Tabular Separated Values), where the fields are separated by a tabulation Thus there are 3 data formats based on uniVocity-parsers. If you use Maven you can just add the following to your pom.xml, substituting the version number for the latest and greatest release (see the download page for the latest versions ). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-univocity-parsers</artifactId> <version>x.x.x</version> </dependency> 361.1. Options Most configuration options of the uniVocity-parsers are available in the data formats. If you want more information about a particular option, please refer to their documentation page . The 3 data formats share common options and have dedicated ones, this section presents them all. 361.2. Options The uniVocity TSV dataformat supports 15 options, which are listed below. Name Default Java Type Description escapeChar \ String The escape character. nullValue String The string representation of a null value. The default value is null skipEmptyLines true Boolean Whether or not the empty lines must be ignored. The default value is true ignoreTrailingWhitespaces true Boolean Whether or not the trailing white spaces must ignored. The default value is true ignoreLeadingWhitespaces true Boolean Whether or not the leading white spaces must be ignored. The default value is true headersDisabled false Boolean Whether or not the headers are disabled. When defined, this option explicitly sets the headers as null which indicates that there is no header. The default value is false headerExtractionEnabled false Boolean Whether or not the header must be read in the first line of the test document The default value is false numberOfRecordsToRead Integer The maximum number of record to read. emptyValue String The String representation of an empty value lineSeparator String The line separator of the files The default value is to use the JVM platform line separator normalizedLineSeparator String The normalized line separator of the files The default value is a new line character. comment # String The comment symbol. The default value is # lazyLoad false Boolean Whether the unmarshalling should produce an iterator that reads the lines on the fly or if all the lines must be read at one. The default value is false asMap false Boolean Whether the unmarshalling should produce maps for the lines values instead of lists. It requires to have header (either defined or collected). The default value is false contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 361.3. Spring Boot Auto-Configuration The component supports 16 options, which are listed below. Name Description Default Type camel.dataformat.univocity-tsv.as-map Whether the unmarshalling should produce maps for the lines values instead of lists. It requires to have header (either defined or collected). The default value is false false Boolean camel.dataformat.univocity-tsv.comment The comment symbol. 
The default value is # # String camel.dataformat.univocity-tsv.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.univocity-tsv.empty-value The String representation of an empty value String camel.dataformat.univocity-tsv.enabled Enable univocity-tsv dataformat true Boolean camel.dataformat.univocity-tsv.escape-char The escape character. \ String camel.dataformat.univocity-tsv.header-extraction-enabled Whether or not the header must be read in the first line of the test document The default value is false false Boolean camel.dataformat.univocity-tsv.headers-disabled Whether or not the headers are disabled. When defined, this option explicitly sets the headers as null which indicates that there is no header. The default value is false false Boolean camel.dataformat.univocity-tsv.ignore-leading-whitespaces Whether or not the leading white spaces must be ignored. The default value is true true Boolean camel.dataformat.univocity-tsv.ignore-trailing-whitespaces Whether or not the trailing white spaces must be ignored. The default value is true true Boolean camel.dataformat.univocity-tsv.lazy-load Whether the unmarshalling should produce an iterator that reads the lines on the fly or if all the lines must be read at once. The default value is false false Boolean camel.dataformat.univocity-tsv.line-separator The line separator of the files The default value is to use the JVM platform line separator String camel.dataformat.univocity-tsv.normalized-line-separator The normalized line separator of the files The default value is a new line character. String camel.dataformat.univocity-tsv.null-value The string representation of a null value. The default value is null String camel.dataformat.univocity-tsv.number-of-records-to-read The maximum number of records to read. Integer camel.dataformat.univocity-tsv.skip-empty-lines Whether or not the empty lines must be ignored. The default value is true true Boolean 361.4. Marshalling usages The marshalling accepts either: A list of maps ( List<Map<String, ?>> ), one for each line A single map ( Map<String, ?> ), for a single line Any other body will throw an exception. 361.4.1. Usage example: marshalling a Map into CSV format <route> <from uri="direct:input"/> <marshal> <univocity-csv/> </marshal> <to uri="mock:result"/> </route> 361.4.2. Usage example: marshalling a Map into fixed-width format <route> <from uri="direct:input"/> <marshal> <univocity-fixed padding="_"> <univocity-header length="5"/> <univocity-header length="5"/> <univocity-header length="5"/> </univocity-fixed> </marshal> <to uri="mock:result"/> </route> 361.4.3. Usage example: marshalling a Map into TSV format <route> <from uri="direct:input"/> <marshal> <univocity-tsv/> </marshal> <to uri="mock:result"/> </route> 361.5. Unmarshalling usages The unmarshalling uses an InputStream in order to read the data. Each row produces either: a list with all the values in it ( asMap option with false ); a map with all the values indexed by the headers ( asMap option with true ). All the rows can either: be collected at once into a list ( lazyLoad option with false ); be read on the fly using an iterator ( lazyLoad option with true ). 361.5.1. 
Usage example: unmarshalling a CSV format into maps with automatic headers <route> <from uri="direct:input"/> <unmarshal> <univocity-csv headerExtractionEnabled="true" asMap="true"/> </unmarshal> <to uri="mock:result"/> </route> 361.5.2. Usage example: unmarshalling a fixed-width format into lists <route> <from uri="direct:input"/> <unmarshal> <univocity-fixed> <univocity-header length="5"/> <univocity-header length="5"/> <univocity-header length="5"/> </univocity-fixed> </unmarshal> <to uri="mock:result"/> </route>
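The TSV variant accepts the same unmarshalling options. The following route is not taken from the original chapter; it is a sketch that combines the univocity-tsv element with the headerExtractionEnabled and asMap options shown for the CSV example above.
<route> <from uri="direct:input"/> <unmarshal> <univocity-tsv headerExtractionEnabled="true" asMap="true"/> </unmarshal> <to uri="mock:result"/> </route>
Each line of the TSV input then becomes a map keyed by the header names, mirroring the CSV example.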
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-univocity-parsers</artifactId> <version>x.x.x</version> </dependency>",
"<route> <from uri=\"direct:input\"/> <marshal> <univocity-csv/> </marshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <marshal> <univocity-fixed padding=\"_\"> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> </univocity-fixed> </marshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <marshal> <univocity-tsv/> </marshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <unmarshal> <univocity-csv headerExtractionEnabled=\"true\" asMap=\"true\"/> </unmarshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <unmarshal> <univocity-fixed> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> </univocity-fixed> </unmarshal> <to uri=\"mock:result\"/> </route>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/univocity-tsv-dataformat
|
Chapter 1. Web Console Overview
|
Chapter 1. Web Console Overview The Red Hat OpenShift Container Platform web console provides a graphical user interface to visualize your project data and perform administrative, management, and troubleshooting tasks. The web console runs as pods on the control plane nodes in the openshift-console project. It is managed by a console-operator pod. Both Administrator and Developer perspectives are supported. Both Administrator and Developer perspectives enable you to create quick start tutorials for OpenShift Container Platform. A quick start is a guided tutorial with user tasks and is useful for getting oriented with an application, Operator, or other product offering. 1.1. About the Administrator perspective in the web console The Administrator perspective enables you to view the cluster inventory, capacity, general and specific utilization information, and the stream of important events, all of which help you to simplify planning and troubleshooting tasks. Both project administrators and cluster administrators can view the Administrator perspective. Cluster administrators can also open an embedded command line terminal instance with the web terminal Operator in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Administrator perspective is displayed by default if the user is recognized as an administrator. The Administrator perspective provides workflows specific to administrator use cases, such as the ability to: Manage workload, storage, networking, and cluster settings. Install and manage Operators using the Operator Hub. Add identity providers that allow users to log in and manage user access through roles and role bindings. View and manage a variety of advanced settings such as cluster updates, partial cluster updates, cluster Operators, custom resource definitions (CRDs), role bindings, and resource quotas. Access and manage monitoring features such as metrics, alerts, and monitoring dashboards. View and manage logging, metrics, and high-status information about the cluster. Visually interact with applications, components, and services associated with the Administrator perspective in OpenShift Container Platform. 1.2. About the Developer perspective in the web console The Developer perspective offers several built-in ways to deploy applications, services, and databases. In the Developer perspective, you can: View real-time visualization of rolling and recreating rollouts on the component. View the application status, resource utilization, project event streaming, and quota consumption. Share your project with others. Troubleshoot problems with your applications by running Prometheus Query Language (PromQL) queries on your project and examining the metrics visualized on a plot. The metrics provide information about the state of a cluster and any user-defined workloads that you are monitoring. Cluster administrators can also open an embedded command line terminal instance in the web console in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Developer perspective is displayed by default if the user is recognised as a developer. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on OpenShift Container Platform by importing existing codebases, images, and container files. 
Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using Eclipse Che. You can use the Topology view to display applications, components, and workloads of your project. If you have no workloads in the project, the Topology view will show some links to create or import them. You can also use the Quick Search to import components directly. Additional resources See Viewing application composition using the Topology view for more information on using the Topology view in Developer perspective. 1.3. Accessing the Perspectives You can access the Administrator and Developer perspective from the web console as follows: Prerequisites To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the permission of the users. The Administrator perspective is selected for users with access to all projects, while the Developer perspective is selected for users with limited access to their own projects Additional resources See Adding User Preferences for more information on changing perspectives. Procedure Use the perspective switcher to switch to the Administrator or Developer perspective. Select an existing project from the Project drop-down list. You can also create a new project from this dropdown. Note You can use the perspective switcher only as cluster-admin . Additional resources Learn more about Cluster Administrator Overview of the Administrator perspective Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view Viewing cluster information Configuring the web console Customizing the web console About the web console Using the web terminal Creating quick start tutorials Disabling the web console
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/web_console/web-console-overview
|
Chapter 2. Node [v1]
|
Chapter 2. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NodeSpec describes the attributes that a node is created with. status object NodeStatus is information about the current status of a node. 2.1.1. .spec Description NodeSpec describes the attributes that a node is created with. Type object Property Type Description configSource object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 externalID string Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966 podCIDR string PodCIDR represents the pod IP range assigned to the node. podCIDRs array (string) podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6. providerID string ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID> taints array If specified, the node's taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. unschedulable boolean Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration 2.1.2. .spec.configSource Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.3. .spec.configSource.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. 
resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.4. .spec.taints Description If specified, the node's taints. Type array 2.1.5. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required key effect Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Required. The taint key to be applied to a node. timeAdded Time TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 2.1.6. .status Description NodeStatus is information about the current status of a node. Type object Property Type Description addresses array List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). addresses[] object NodeAddress contains information for the node's address. allocatable object (Quantity) Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity. capacity object (Quantity) Capacity represents the total resources of a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity conditions array Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition conditions[] object NodeCondition contains condition information for a node. config object NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. daemonEndpoints object NodeDaemonEndpoints lists ports opened by daemons running on the Node. images array List of container images on this node images[] object Describe a container image nodeInfo object NodeSystemInfo is a set of ids/uuids to uniquely identify the node. phase string NodePhase is the recently observed lifecycle phase of the node. More info: https://kubernetes.io/docs/concepts/nodes/node/#phase The field is never populated, and now is deprecated. 
Possible enum values: - "Pending" means the node has been created/added by the system, but not configured. - "Running" means the node has been configured and has Kubernetes components running. - "Terminated" means the node has been removed from the cluster. volumesAttached array List of volumes that are attached to the node. volumesAttached[] object AttachedVolume describes a volume attached to a node volumesInUse array (string) List of attachable volumes in use (mounted) by the node. 2.1.7. .status.addresses Description List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). Type array 2.1.8. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required type address Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 2.1.9. .status.conditions Description Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition Type array 2.1.10. .status.conditions[] Description NodeCondition contains condition information for a node. Type object Required type status Property Type Description lastHeartbeatTime Time Last time we got an update on a given condition. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of node condition. 2.1.11. .status.config Description NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. Type object Property Type Description active object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 assigned object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 error string Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. 
Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions. lastKnownGood object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 2.1.12. .status.config.active Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.13. .status.config.active.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.14. .status.config.assigned Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.15. .status.config.assigned.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.16. .status.config.lastKnownGood Description NodeConfigSource specifies a source of node configuration. 
Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.17. .status.config.lastKnownGood.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.18. .status.daemonEndpoints Description NodeDaemonEndpoints lists ports opened by daemons running on the Node. Type object Property Type Description kubeletEndpoint object DaemonEndpoint contains information about a single Daemon endpoint. 2.1.19. .status.daemonEndpoints.kubeletEndpoint Description DaemonEndpoint contains information about a single Daemon endpoint. Type object Required Port Property Type Description Port integer Port number of the given endpoint. 2.1.20. .status.images Description List of container images on this node Type array 2.1.21. .status.images[] Description Describe a container image Type object Property Type Description names array (string) Names by which this image is known. e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"] sizeBytes integer The size of the image in bytes. 2.1.22. .status.nodeInfo Description NodeSystemInfo is a set of ids/uuids to uniquely identify the node. Type object Required machineID systemUUID bootID kernelVersion osImage containerRuntimeVersion kubeletVersion kubeProxyVersion operatingSystem architecture Property Type Description architecture string The Architecture reported by the node bootID string Boot ID reported by the node. containerRuntimeVersion string ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2). kernelVersion string Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64). kubeProxyVersion string KubeProxy Version reported by the node. kubeletVersion string Kubelet Version reported by the node. machineID string MachineID reported by the node. For unique machine identification in the cluster this field is preferred. Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html operatingSystem string The Operating System reported by the node osImage string OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)). systemUUID string SystemUUID reported by the node. For unique machine identification MachineID is preferred. 
This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid 2.1.23. .status.volumesAttached Description List of volumes that are attached to the node. Type array 2.1.24. .status.volumesAttached[] Description AttachedVolume describes a volume attached to a node Type object Required name devicePath Property Type Description devicePath string DevicePath represents the device path where the volume should be available name string Name of the attached volume 2.2. API endpoints The following API endpoints are available: /api/v1/nodes DELETE : delete collection of Node GET : list or watch objects of kind Node POST : create a Node /api/v1/watch/nodes GET : watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /api/v1/watch/nodes/{name} GET : watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 2.2.1. /api/v1/nodes HTTP method DELETE Description delete collection of Node Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Node Table 2.3. HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Node schema Table 2.6. 
HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 2.2.2. /api/v1/watch/nodes HTTP method GET Description watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /api/v1/nodes/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the Node HTTP method DELETE Description delete a Node Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body Node schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 2.2.4. /api/v1/watch/nodes/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /api/v1/nodes/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description read status of the specified Node Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body Node schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty
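To make the endpoint tables above concrete, the following is a minimal sketch of calling two of these endpoints with curl. It is illustrative only: the API server URL, the token retrieval, and the node name worker-0 are assumptions, not values from this reference.

# Obtain a bearer token from the current oc session (assumes you are already logged in with oc).
TOKEN=$(oc whoami -t)
API=https://api.example.com:6443   # hypothetical API server URL

# GET /api/v1/nodes/{name}: read the specified Node
curl -k -H "Authorization: Bearer ${TOKEN}" "${API}/api/v1/nodes/worker-0"

# PATCH /api/v1/nodes/{name}: partially update the specified Node.
# A JSON merge patch replaces spec.taints with the list given here.
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"taints":[{"key":"dedicated","value":"example","effect":"NoSchedule"}]}}' \
  "${API}/api/v1/nodes/worker-0"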
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/node_apis/node-v1
|
Chapter 1. About CI/CD
|
Chapter 1. About CI/CD OpenShift Dedicated is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, OpenShift Dedicated provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps Jenkins 1.1. OpenShift Builds OpenShift Builds provides the following options to configure and run a build: Builds using Shipwright is an extensible build framework based on the Shipwright project. You can use it to build container images on an OpenShift Dedicated cluster. You can build container images from source code and Dockerfiles by using image build tools, such as Source-to-Image (S2I) and Buildah. For more information, see builds for Red Hat OpenShift . Builds using BuildConfig objects is a declarative build process for creating cloud-native apps. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object builds a runnable image and pushes the image to a container image registry. With the BuildConfig object, you can create a Docker, Source-to-Image (S2I), or custom build. For more information, see Understanding image builds . 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. Each step can scale independently to meet on-demand pipeline workloads, with predictable outcomes. For more information, see Red Hat OpenShift Pipelines . 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Red Hat OpenShift GitOps . 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with OpenShift Dedicated. Jenkins can be deployed on OpenShift by using the Samples Operator templates or a certified Helm chart. For more information, see Configuring Jenkins images .
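As a concrete illustration of the BuildConfig option described above, the following is a minimal sketch of a BuildConfig manifest. The application name, Git repository, and builder image stream are illustrative assumptions, not values from this overview.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app                                # hypothetical name
spec:
  source:
    git:
      uri: https://github.com/example/app.git      # hypothetical repository
  strategy:
    sourceStrategy:                                # Source-to-Image (S2I) build
      from:
        kind: ImageStreamTag
        name: nodejs:latest                        # assumed builder image stream tag
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest                     # image pushed to the internal registry
  triggers:
  - type: ConfigChange                             # rebuild when the BuildConfig changes

After creating this object, a build could be started from it with oc start-build example-app.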
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cicd_overview/ci-cd-overview
|
4.11. Information Gathering Tools
|
4.11. Information Gathering Tools The utilities listed below are command-line tools that provide well-formatted information, such as access vector cache statistics or the number of classes, types, or Booleans. avcstat This command provides a short output of the access vector cache statistics since boot. You can watch the statistics in real time by specifying a time interval in seconds. This provides updated statistics since the initial output. The statistics file used is /sys/fs/selinux/avc/cache_stats , and you can specify a different cache file with the -f /path/to/file option. seinfo This utility is useful in describing the break-down of a policy, such as the number of classes, types, Booleans, allow rules, and others. seinfo is a command-line utility that uses a policy.conf file, a binary policy file, a modular list of policy packages, or a policy list file as input. You must have the setools-console package installed to use the seinfo utility. The output of seinfo will vary between binary and source files. For example, the policy source file uses the { } brackets to group multiple rule elements onto a single line. A similar effect happens with attributes, where a single attribute expands into one or many types. Because these are expanded and no longer relevant in the binary policy file, they have a return value of zero in the search results. However, the number of rules greatly increases as each formerly one line rule using brackets is now a number of individual lines. Some items are not present in the binary policy. For example, neverallow rules are only checked during policy compile, not during runtime, and initial Security Identifiers (SIDs) are not part of the binary policy since they are required prior to the policy being loaded by the kernel during boot. The seinfo utility can also list the number of types with the domain attribute, giving an estimate of the number of different confined processes: Not all domain types are confined. To look at the number of unconfined domains, use the unconfined_domain attribute: Permissive domains can be counted with the --permissive option: Remove the additional | wc -l command in the above commands to see the full lists. sesearch You can use the sesearch utility to search for a particular rule in the policy. It is possible to search either policy source files or the binary file. For example: The sesearch utility can provide the number of allow rules: And the number of dontaudit rules:
|
[
"~]# avcstat lookups hits misses allocs reclaims frees 47517410 47504630 12780 12780 12176 12275",
"~]# seinfo Statistics for policy file: /sys/fs/selinux/policy Policy Version & Type: v.28 (binary, mls) Classes: 77 Permissions: 229 Sensitivities: 1 Categories: 1024 Types: 3001 Attributes: 244 Users: 9 Roles: 13 Booleans: 158 Cond. Expr.: 193 Allow: 262796 Neverallow: 0 Auditallow: 44 Dontaudit: 156710 Type_trans: 10760 Type_change: 38 Type_member: 44 Role allow: 20 Role_trans: 237 Range_trans: 2546 Constraints: 62 Validatetrans: 0 Initial SIDs: 27 Fs_use: 22 Genfscon: 82 Portcon: 373 Netifcon: 0 Nodecon: 0 Permissives: 22 Polcap: 2",
"~]# seinfo -adomain -x | wc -l 550",
"~]# seinfo -aunconfined_domain_type -x | wc -l 52",
"~]# seinfo --permissive -x | wc -l 31",
"~]USD sesearch --role_allow -t httpd_sys_content_t Found 20 role allow rules: allow system_r sysadm_r; allow sysadm_r system_r; allow sysadm_r staff_r; allow sysadm_r user_r; allow system_r git_shell_r; allow system_r guest_r; allow logadm_r system_r; allow system_r logadm_r; allow system_r nx_server_r; allow system_r staff_r; allow staff_r logadm_r; allow staff_r sysadm_r; allow staff_r unconfined_r; allow staff_r webadm_r; allow unconfined_r system_r; allow system_r unconfined_r; allow system_r user_r; allow webadm_r system_r; allow system_r webadm_r; allow system_r xguest_r;",
"~]# sesearch --allow | wc -l 262798",
"~]# sesearch --dontaudit | wc -l 156712"
] |
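A few further invocations of the same tools, shown as illustrative sketches rather than output captured from a real system: avcstat accepts an interval in seconds and refreshes its statistics until interrupted, seinfo can expand a single type to show its attributes, and sesearch can be narrowed to a specific source domain, target type, object class, and permission. The httpd_t and httpd_sys_content_t types are used here only as familiar examples.

~]# avcstat 5
~]# seinfo -thttpd_t -x
~]# sesearch --allow -s httpd_t -t httpd_sys_content_t -c file -p read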
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-maintaining_selinux_labels-information_gathering_tools
|
Chapter 36. Type Converters
|
Chapter 36. Type Converters Abstract Apache Camel has a built-in type conversion mechanism, which is used to convert message bodies and message headers to different types. This chapter explains how to extend the type conversion mechanism by adding your own custom converter methods. 36.1. Type Converter Architecture Overview This section describes the overall architecture of the type converter mechanism, which you must understand, if you want to write custom type converters. If you only need to use the built-in type converters, see Chapter 34, Understanding Message Formats . Type converter interface Example 36.1, "TypeConverter Interface" shows the definition of the org.apache.camel.TypeConverter interface, which all type converters must implement. Example 36.1. TypeConverter Interface Controller type converter The Apache Camel type converter mechanism follows a controller/worker pattern. There are many worker type converters, which are each capable of performing a limited number of type conversions, and a single controller type converter, which aggregates the type conversions performed by the workers. The controller type converter acts as a front-end for the worker type converters. When you request the controller to perform a type conversion, it selects the appropriate worker and delegates the conversion task to that worker. For users of the type conversion mechanism, the controller type converter is the most important because it provides the entry point for accessing the conversion mechanism. During start up, Apache Camel automatically associates a controller type converter instance with the CamelContext object. To obtain a reference to the controller type converter, you call the CamelContext.getTypeConverter() method. For example, if you have an exchange object, exchange , you can obtain a reference to the controller type converter as shown in Example 36.2, "Getting a Controller Type Converter" . Example 36.2. Getting a Controller Type Converter Type converter loader The controller type converter uses a type converter loader to populate the registry of worker type converters. A type converter loader is any class that implements the TypeConverterLoader interface. Apache Camel currently uses only one kind of type converter loader - the annotation type converter loader (of AnnotationTypeConverterLoader type). Type conversion process Figure 36.1, "Type Conversion Process" gives an overview of the type conversion process, showing the steps involved in converting a given data value, value , to a specified type, toType . Figure 36.1. Type Conversion Process The type conversion mechanism proceeds as follows: The CamelContext object holds a reference to the controller TypeConverter instance. The first step in the conversion process is to retrieve the controller type converter by calling CamelContext.getTypeConverter() . Type conversion is initiated by calling the convertTo() method on the controller type converter. This method instructs the type converter to convert the data object, value , from its original type to the type specified by the toType argument. Because the controller type converter is a front end for many different worker type converters, it looks up the appropriate worker type converter by checking a registry of type mappings The registry of type converters is keyed by a type mapping pair ( toType , fromType ) . If a suitable type converter is found in the registry, the controller type converter calls the worker's convertTo() method and returns the result. 
If a suitable type converter cannot be found in the registry, the controller type converter loads a new type converter, using the type converter loader. The type converter loader searches the available JAR libraries on the classpath to find a suitable type converter. Currently, the loader strategy that is used is implemented by the annotation type converter loader, which attempts to load a class annotated by the org.apache.camel.Converter annotation. See the section called "Create a TypeConverter file" . If the type converter loader is successful, a new worker type converter is loaded and entered into the type converter registry. This type converter is then used to convert the value argument to the toType type. If the data is successfully converted, the converted data value is returned. If the conversion does not succeed, null is returned. 36.2. Handling Duplicate Type Converters You can configure what must happen if a duplicate type converter is added. In the TypeConverterRegistry (See Section 36.3, "Implementing Type Converter Using Annotations" ) you can set the action to Override , Ignore or Fail using the following code: Override in this code can be replaced by Ignore or Fail , depending on your requirements. TypeConverterExists Class The TypeConverterExists class consists of the following commands: 36.3. Implementing Type Converter Using Annotations Overview The type conversion mechanism can easily be customized by adding a new worker type converter. This section describes how to implement a worker type converter and how to integrate it with Apache Camel, so that it is automatically loaded by the annotation type converter loader. How to implement a type converter To implement a custom type converter, perform the following steps: the section called "Implement an annotated converter class" . the section called "Create a TypeConverter file" . the section called "Package the type converter" . Implement an annotated converter class You can implement a custom type converter class using the @Converter annotation. You must annotate the class itself and each of the static methods intended to perform type conversion. Each converter method takes an argument that defines the from type, optionally takes a second Exchange argument, and has a non-void return value that defines the to type. The type converter loader uses Java reflection to find the annotated methods and integrate them into the type converter mechanism. Example 36.3, "Example of an Annotated Converter Class" shows an example of an annotated converter class that defines a converter method for converting from java.io.File to java.io.InputStream and another converter method (with an Exchange argument) for converting from byte[] to String . Example 36.3. Example of an Annotated Converter Class The toInputStream() method is responsible for performing the conversion from the File type to the InputStream type and the toString() method is responsible for performing the conversion from the byte[] type to the String type. Note The method name is unimportant, and can be anything you choose. What is important are the argument type, the return type, and the presence of the @Converter annotation. Create a TypeConverter file To enable the discovery mechanism (which is implemented by the annotation type converter loader ) for your custom converter, create a TypeConverter file at the following location: The TypeConverter file must contain a comma-separated list of Fully Qualified Names (FQN) of type converter classes. 
For example, if you want the type converter loader to search the YourPackageName . YourClassName package for annotated converter classes, the TypeConverter file would have the following contents: An alternative method of enabling the discovery mechanism is to add just package names to the TypeConverter file. For example, the TypeConverter file would have the following contents: This would cause the package scanner to scan through the packages for the @Converter tag. Using the FQN method is faster and is the preferred method. Package the type converter The type converter is packaged as a JAR file containing the compiled classes of your custom type converters and the META-INF directory. Put this JAR file on your classpath to make it available to your Apache Camel application. Fallback converter method In addition to defining regular converter methods using the @Converter annotation, you can optionally define a fallback converter method using the @FallbackConverter annotation. The fallback converter method will only be tried, if the controller type converter fails to find a regular converter method in the type registry. The essential difference between a regular converter method and a fallback converter method is that whereas a regular converter is defined to perform conversion between a specific pair of types (for example, from byte[] to String ), a fallback converter can potentially perform conversion between any pair of types. It is up to the code in the body of the fallback converter method to figure out which conversions it is able to perform. At run time, if a conversion cannot be performed by a regular converter, the controller type converter iterates through every available fallback converter until it finds one that can perform the conversion. The method signature of a fallback converter can have either of the following forms: Where MethodName is an arbitrary method name for the fallback converter. For example, the following code extract (taken from the implementation of the File component) shows a fallback converter that can convert the body of a GenericFile object, exploiting the type converters already available in the type converter registry: 36.4. Implementing a Type Converter Directly Overview Generally, the recommended way to implement a type converter is to use an annotated class, as described in the section, Section 36.3, "Implementing Type Converter Using Annotations" . But if you want to have complete control over the registration of your type converter, you can implement a custom worker type converter and add it directly to the type converter registry, as described here. Implement the TypeConverter interface To implement your own type converter class, define a class that implements the TypeConverter interface. For example, the following MyOrderTypeConverter class converts an integer value to a MyOrder object, where the integer value is used to initialize the order ID in the MyOrder object. Add the type converter to the registry You can add the custom type converter directly to the type converter registry using code like the following: Where context is the current org.apache.camel.CamelContext instance. The addTypeConverter() method registers the MyOrderTypeConverter class against the specific type conversion, from String.class to MyOrder.class . You can add custom type converters to your Camel applications without having to use the META-INF file. If you are using Spring or Blueprint , then you can just declare a <bean>. 
CamelContext discovers the bean automatically and adds the converters. You can declare multiple <bean>s if you have more classes.
|
[
"package org.apache.camel; public interface TypeConverter { <T> T convertTo(Class<T> type, Object value); }",
"org.apache.camel.TypeConverter tc = exchange.getContext().getTypeConverter();",
"typeconverterregistry = camelContext.getTypeConverter() // Define the behaviour if the TypeConverter already exists typeconverterregistry.setTypeConverterExists(TypeConverterExists.Override);",
"package org.apache.camel; import javax.xml.bind.annotation.XmlEnum; /** * What to do if attempting to add a duplicate type converter * * @version */ @XmlEnum public enum TypeConverterExists { Override, Ignore, Fail }",
"package com. YourDomain . YourPackageName ; import org.apache.camel. Converter ; import java.io.*; @Converter public class IOConverter { private IOConverter() { } @Converter public static InputStream toInputStream(File file) throws FileNotFoundException { return new BufferedInputStream(new FileInputStream(file)); } @Converter public static String toString(byte[] data, Exchange exchange) { if (exchange != null) { String charsetName = exchange.getProperty(Exchange.CHARSET_NAME, String.class); if (charsetName != null) { try { return new String(data, charsetName); } catch (UnsupportedEncodingException e) { LOG.warn(\"Can't convert the byte to String with the charset \" + charsetName, e); } } } return new String(data); } }",
"META-INF/services/org/apache/camel/TypeConverter",
"com. PackageName . FooClass",
"com. PackageName",
"// 1. Non-generic form of signature @FallbackConverter public static Object MethodName ( Class type, Exchange exchange, Object value, TypeConverterRegistry registry ) // 2. Templating form of signature @FallbackConverter public static <T> T MethodName ( Class<T> type, Exchange exchange, Object value, TypeConverterRegistry registry )",
"package org.apache.camel.component.file; import org.apache.camel. Converter ; import org.apache.camel. FallbackConverter ; import org.apache.camel.Exchange; import org.apache.camel.TypeConverter; import org.apache.camel.spi.TypeConverterRegistry; @Converter public final class GenericFileConverter { private GenericFileConverter() { // Helper Class } @FallbackConverter public static <T> T convertTo(Class<T> type, Exchange exchange, Object value, TypeConverterRegistry registry) { // use a fallback type converter so we can convert the embedded body if the value is GenericFile if (GenericFile.class.isAssignableFrom(value.getClass())) { GenericFile file = (GenericFile) value; Class from = file.getBody().getClass(); TypeConverter tc = registry.lookup(type, from); if (tc != null) { Object body = file.getBody(); return tc.convertTo(type, exchange, body); } } return null; } }",
"import org.apache.camel.TypeConverter private class MyOrderTypeConverter implements TypeConverter { public <T> T convertTo(Class<T> type, Object value) { // converter from value to the MyOrder bean MyOrder order = new MyOrder(); order.setId(Integer.parseInt(value.toString())); return (T) order; } public <T> T convertTo(Class<T> type, Exchange exchange, Object value) { // this method with the Exchange parameter will be preferd by Camel to invoke // this allows you to fetch information from the exchange during convertions // such as an encoding parameter or the likes return convertTo(type, value); } public <T> T mandatoryConvertTo(Class<T> type, Object value) { return convertTo(type, value); } public <T> T mandatoryConvertTo(Class<T> type, Exchange exchange, Object value) { return convertTo(type, value); } }",
"// Add the custom type converter to the type converter registry context.getTypeConverterRegistry().addTypeConverter(MyOrder.class, String.class, new MyOrderTypeConverter());",
"<bean id=\"myOrderTypeConverters\" class=\"...\"/> <camelContext> </camelContext>"
] |
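The chapter shows how converters are written and registered, but not what invoking them looks like from a route. The following is a minimal illustrative sketch, not taken from the Apache Camel sources, that assumes the MyOrder bean and the MyOrderTypeConverter registration shown above, plus a getId() accessor on MyOrder:

from("direct:orders")
    // Implicit conversion: Camel looks up the String -> MyOrder worker converter
    // in the type converter registry and applies it to the message body.
    .convertBodyTo(MyOrder.class)
    .process(exchange -> {
        MyOrder order = exchange.getIn().getBody(MyOrder.class);
        // Explicit conversion through the controller type converter.
        String id = exchange.getContext().getTypeConverter()
                .convertTo(String.class, order.getId());
        exchange.getIn().setHeader("orderId", id);
    })
    .to("mock:result");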
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/TypeConv
|
Deploying and Managing AMQ Streams on OpenShift
|
Deploying and Managing AMQ Streams on OpenShift Red Hat Streams for Apache Kafka 2.5 Deploy and manage AMQ Streams 2.5 on OpenShift Container Platform
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/index
|
2.4. Partition Information
|
2.4. Partition Information Figure 2.4. Partition Information Select whether or not to clear the Master Boot Record (MBR). Choose to remove all existing partitions, remove all existing Linux partitions, or preserve existing partitions. To initialize the disk label to the default for the architecture of the system (for example, msdos for x86 and gpt for Itanium), select Initialize the disk label if you are installing on a brand new hard drive. 2.4.1. Creating Partitions To create a partition, click the Add button. The Partition Options window shown in Figure 2.5, "Creating Partitions" appears. Choose the mount point, file system type, and partition size for the new partition. Optionally, you can also choose from the following: In the Additional Size Options section, choose to make the partition a fixed size, up to a chosen size, or fill the remaining space on the hard drive. If you selected swap as the file system type, you can have the installation program create the swap partition with the recommended size instead of specifying a size. Force the partition to be created as a primary partition. Create the partition on a specific hard drive. For example, to make the partition on the first IDE hard disk ( /dev/hda ), specify hda as the drive. Do not include /dev in the drive name. Use an existing partition. For example, to create the partition on the first partition of the first IDE hard disk ( /dev/hda1 ), specify hda1 as the partition. Do not include /dev in the partition name. Format the partition as the chosen file system type. Figure 2.5. Creating Partitions To edit an existing partition, select the partition from the list and click the Edit button. The same Partition Options window appears as when you chose to add a partition as shown in Figure 2.5, "Creating Partitions" , except it reflects the values for the selected partition. Modify the partition options and click OK . To delete an existing partition, select the partition from the list and click the Delete button. 2.4.1.1. Creating Software RAID Partitions To create a software RAID partition, use the following steps: Click the RAID button. Select Create a software RAID partition . Configure the partitions as previously described, except select Software RAID as the file system type. Also, you must specify a hard drive on which to make the partition or specify an existing partition to use. Figure 2.6. Creating a Software RAID Partition Repeat these steps to create as many partitions as needed for your RAID setup. Not all of your partitions have to be RAID partitions. After creating all the partitions needed to form a RAID device, follow these steps: Click the RAID button. Select Create a RAID device . Select a mount point, file system type, RAID device name, RAID level, RAID members, number of spares for the software RAID device, and whether to format the RAID device. Figure 2.7. Creating a Software RAID Device Click OK to add the device to the list.
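The Kickstart Configurator writes these selections into the partitioning section of the kickstart file. The fragment below is only a sketch of what such a section might look like for the software RAID scenario described above; the device names, sizes, and file system type are assumptions, not values from this section.

# Clear the Master Boot Record and remove existing partitions, initializing the disk label
zerombr yes
clearpart --all --initlabel

# Basic partitions
part /boot --fstype ext3 --size=100 --ondisk=hda
part swap --recommended

# Two software RAID member partitions, one per disk
part raid.01 --size=2000 --ondisk=hda
part raid.02 --size=2000 --ondisk=hdb

# Combine the RAID members into a RAID 1 device mounted at /
raid / --fstype ext3 --level=1 --device=md0 raid.01 raid.02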
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/RHKSTOOL-Partition_Information
|
Chapter 1. Support policy for Eclipse Temurin
|
Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these versions remain similar to Oracle JDK versions that Oracle designates as long-term support (LTS). A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached its end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Eclipse Temurin.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.3/rn-openjdk-temurin-support-policy
|
A.12. VDSM Hook Return Codes
|
A.12. VDSM Hook Return Codes Hook scripts must return one of the return codes shown in Table A.3, "Hook Return Codes" . The return code will determine whether further hook scripts are processed by VDSM. Table A.3. Hook Return Codes Code Description 0 The hook script ended successfully 1 The hook script failed, other hooks should be processed 2 The hook script failed, no further hooks should be processed >2 Reserved
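To show how a hook script honors this contract in practice, here is a minimal illustrative sketch of a hook written as a shell script. The checks it performs and the configuration file name are placeholders invented for the example; only the exit codes themselves are defined by VDSM.

#!/bin/bash
# Minimal sketch of a VDSM hook script that follows the return-code contract above.

if ! [ -r /etc/example-hook.conf ]; then
    # Non-fatal problem: report failure but let VDSM process the remaining hooks.
    echo "example-hook: configuration file missing" >&2
    exit 1
fi

if grep -q "^abort=true" /etc/example-hook.conf; then
    # Fatal problem: report failure and stop processing further hooks.
    echo "example-hook: abort requested" >&2
    exit 2
fi

# Everything is fine.
exit 0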
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/vdsm_hooks_return_codes
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service_with_fedramp/proc-providing-feedback-on-redhat-documentation
|
Chapter 20. CICS
|
Chapter 20. CICS Since Camel 4.4-redhat Only producer is supported. This component allows you to interact with the IBM CICS (R) general-purpose transaction processing subsystem. Note Only synchronous mode calls are supported. 20.1. Dependencies When using camel-cics with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cics-starter</artifactId> </dependency> You must also declare the ctgclient.jar dependency when working with the camel-cics starter. This JAR is provided by IBM and is included in the CICS system. 20.2. URI format Where interfaceType is the set of CICS external APIs that camel-cics invokes. At the moment, only ECI (External Call Interface) is supported. This component communicates with the CICS server using two kinds of dataExchangeType . commarea is a block of storage, limited to 32763 bytes, allocated by the program. channel is the new mechanism for exchanging data, analogous to a parameter list. By default, if dataExchangeType is not specified, this component uses commarea : To use the channel and the container, you must specify them explicitly in the URI. 20.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 20.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 20.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 20.4. Component Options The CICS component supports 17 options, which are listed below. Name Description Default Type ctgDebug Enable debug mode on the underlying IBM CGT client. false java.lang.Boolean eciBinding The Binding instance to transform a Camel Exchange to EciRequest and vice versa com.redhat.camel.component.cics.CICSEciBinding eciTimeout The ECI timeout value associated with this ECIRequest object. An ECI timeout value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. An ECI timeout value greater than zero indicates that the ECIRequest may be timed out by CICS Transaction Gateway. ECI timeout can expire before a response is received from CICS. This means that the client does not receive the confirmation from CICS that a unit of work has been backed out or committed. 0 short encoding The transfer encoding of the message.
Cp1145 java.lang.String gatewayFactory The connection factory to be used com.redhat.camel.component.cics.pool.CICSGatewayFactory host The address of the CICS Transaction Gateway that this instance connects to java.lang.String lazyStartProducer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. boolean port The port of the CICS Transaction Gateway that this instance connects to. 2006 int protocol the protocol that this component will use to connect to the CICS Transaction Gateway. tcp java.lang.String server The address of the CICS server that this instance connects to. java.lang.String sslKeyring The full classname of the SSL key ring class or keystore file to be used for the client encrypted connection. java.lang.String sslPassword The password for the encrypted key ring class or keystore java.lang.String configuration To use a shared CICS configuration com.redhat.camel.component.cics.CICSConfiguration socketConnectionTimeout The socket connection timeout int password Password to use for authentication java.lang.String userId User ID to use for authentication java.lang.String 20.5. Endpoint Options The CICS endpoint is configured using URI syntax: With the following path and query parameters: 20.5.1. Path Parameters (2 parameters) Name Description Default Type interfaceType The interface type, can be eci, esi or epi. at the moment only eci is supported. eci java.lang.String dataExchangeType The kind of data exchange to use Enum value: commarea channel commarea com.redhat.camel.component.cics.support.CICSDataExchangeType 20.5.2. Query Parameters (15 parameters) Name Description Default Type ctgDebug Enable debug mode on the underlying IBM CGT client. false java.lang.Boolean eciBinding The Binding instance to transform a Camel Exchange to EciRequest and vice versa com.redhat.camel.component.cics.CICSEciBinding eciTimeout The ECI timeout value associated with this ECIRequest object. An ECI timeout value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. An ECI timeout value greater than zero indicates that the ECIRequest may be timed out by CICS Transaction Gateway. ECI timeout can expire before a response is received from CICS. This means that the client does not receive the confirmation from CICS that a unit of work has been backed out or committed. 0 short encoding Encoding to convert COMMAREA data to before sending. Cp1145 java.lang.String gatewayFactory The connection factory to use com.redhat.camel.component.cics.pool.CICSGatewayFactory host The address of the CICS Transaction Gateway that this instance connects to localhost java.lang.String port The port of the CICS Transaction Gateway that this instance connects to. 2006 int protocol the protocol that this component will use to connect to the CICS Transaction Gateway. tcp java.lang.String server The address of the CICS server that this instance connects to java.lang.String lazyStartProducer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. boolean sslKeyring The full class name of the SSL key ring class or keystore file to be used for the client encrypted connection java.lang.String sslPassword The password for the encrypted key ring class or keystore java.lang.String socketConnectionTimeout The socket connection timeout int password Password to use for authentication java.lang.String userId User ID to use for authentication java.lang.String 20.6. Message Headers The CICS component supports 15 message header(s), which is/are listed below: Name Description Default Type CICS_RETURN_CODE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_RETURN_CODE_HEADER Return code from this flow operation. int CICS_RETURN_CODE_STRING Constant: com.redhat.camel.component.cics.CICSConstants#CICS_RETURN_CODE_STRING_HEADER The CICS return code as a String. The String is the name of the appropriate Java constant, for example, if this header is ECI_NO_ERROR, then the String returned will be ECI_NO_ERROR. If this header is unknown then the String returned will be ECI_UNKNOWN_CICS_RC. Note For CICS return codes that may have more than one meaning the String returned is a concatenation of the return codes. The only concatenated String is: ECI_ERR_REQUEST_TIMEOUT_OR_ERR_NO_REPLY. java.ang.String CICS_EXTEND_MODE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_EXTEND_MODE_HEADER Extend mode of request. The default value is ECI_NO_EXTEND. int CICS_LUW_TOKEN Constant: com.redhat.camel.component.cics.CICSConstants#CICS_LUW_TOKEN_HEADER Extended Logical Unit of Work token. The default value is ECI_LUW_NEW. int CICS_PROGRAM_NAME Constant: com.redhat.camel.component.cics.CICSConstants#CICS_PROGRAM_NAME_HEADER Program to invoke on CICS server. java.lang.String CICS_TRANSACTION_ID Constant: com.redhat.camel.component.cics.CICSConstants#CICS_TRANSACTION_ID_HEADER Transaction ID to run CICS program under. java.lang.String CICS_COMM_AREA_SIZE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_COMM_AREA_SIZE_HEADER Length of COMMAREA. The default value is 0. int CICS_CHANNEL_NAME Constant: com.redhat.camel.component.cics.CICSConstants#CICS_CHANNEL_NAME_HEADER The name of the channel to create com.redhat.camel.component.cics.CICSConstants#CICS_CHANNEL_NAME_HEADER java.lang.String CICS_CONTAINER_NAME Constant: com.redhat.camel.component.cics.CICSConstants#CICS_CONTAINER_NAME_HEADER The name of the container to create. java.lang.String CICS_CHANNEL_CCSID Constant: com.redhat.camel.component.cics.CICSConstants#CICS_CHANNEL_CCSID_HEADER The CCSID the channel should set as its default. int CICS_SERVER Constant: com.redhat.camel.component.cics.CICSConstants#CICS_SERVER_HEADER CICS server to direct request to. This header overrides the value configured in the endpoint. java.lang.String CICS_USER_ID Constant: com.redhat.camel.component.cics.CICSConstants#CICS_USER_ID_HEADER User ID for CICS server. This header overrides the value configured in the endpoint. 
java.lang.String CICS_PASSWORD Constant: com.redhat.camel.component.cics.CICSConstants#CICS_PASSWORD_HEADER Password or password phrase for CICS server. This header overrides the value configured in the endpoint. java.lang.String CICS_ABEND_CODE Constant: com.redhat.camel.component.cics.CICSConstants#CICS_ABEND_CODE_HEADER CICS transaction abend code. java.lang.String CICS_ECI_REQUEST_TIMEOUT Constant: com.redhat.camel.component.cics.CICSConstants#CICS_ECI_REQUEST_TIMEOUT_HEADER The value, in seconds, of the ECI timeout for the current ECIRequest. A value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway 0 short CICS_ENCODING Constant: com.redhat.camel.component.cics.CICSConstants#CICS_ENCODING_HEADER Encoding to convert COMMAREA data to before sending. String 20.7. Samples 20.7.1. Using Commarea Following sample show how to configure a route that runs a program on a CICS server using COMMAREA. The COMMAREA size has to be defined in CICS_COMM_AREA_SIZE header, while the COMMAREA input data is defined in the Camel Exchange body. Note You must create a COMMAREA that is large enough to contain all the information to be sent to the server and large enough to contain all the information that can be returned from the server. //..... import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_COMM_AREA_SIZE_HEADER; //.... from("direct:run"). setHeader(CICS_PROGRAM_NAME_HEADER, "ECIREADY"). setHeader(CICS_COMM_AREA_SIZE_HEADER, 18). setBody(constant("My input data")). to("cics:eci/commarea?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar"); The Outcome of the CICS program invocation is mapped to Camel Exchange in this way: The numeric value of return code is stored in the CICS_RETURN_CODE header The COMMAREA output data is stored in the Camel Exchange Body. 20.7.2. Using Channel with a single input container Following sample shows how to use a channel with a single container to run a CICS program. The channel name and the container name are taken from headers, and the container value from the body: //..... import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CONTAINER_NAME_HEADER; //... from("direct:run"). setHeader(CICS_PROGRAM_NAME_HEADER, "EC03"). setHeader(CICS_CHANNEL_NAME_HEADER, "SAMPLECHANNEL"). setHeader(CICS_CONTAINER_NAME_HEADER, "INPUTDATA"). setBody(constant("My input data")). to("cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar"); The container(s) returned is stored in an java.util.Map<String,Object> , the key is the container name and the value is the output data of the container. 20.7.3. Using Channel with multiple input container If you need to run a CICS program that takes multiple container as input, you can create a java.util.Map<String,Object> where the keys are the container names and the values are the input data. In this case the CICS_CONTAINER_NAME header is ignored. //..... import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; //... from("direct:run"). setHeader(CICS_PROGRAM_NAME_HEADER, "EC03"). setHeader(CICS_CHANNEL_NAME_HEADER, "SAMPLECHANNEL"). 
process(exchange->{ byte[] thirdContainerData = HexFormat.of().parseHex("e04fd020ea3a6910a2d808002b30309d"); Map<String,Object> containers = Map.of( "firstContainerName", "firstContainerData", "secondContainerName", "secondContainerData", "thirdContainerName", thirdContainerData ); exchange.getMessage().setBody(containers); }). to("cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar"); 20.8. Spring Boot Auto-Configuration The component supports 17 options, which are listed below. Name Description Default Type camel.component.cics.binding The Binding instance to transform a Camel Exchange to EciRequest and vice versa. com.redhat.camel.component.cics.CICSEciBinding camel.component.cics.configuration Configuration. com.redhat.camel.component.cics.CICSConfiguration camel.component.cics.ctg-debug Enable debug mode on the underlying IBM CTG client. java.lang.Boolean camel.component.cics.eci-timeout The ECI timeout value associated with this ECIRequest object. An ECI timeout value of zero indicates that this ECIRequest will not be timed out by CICS Transaction Gateway. An ECI timeout value greater than zero indicates that the ECIRequest may be timed out by CICS Transaction Gateway. ECI timeout can expire before a response is received from CICS. This means that the client does not receive the confirmation from CICS that a unit of work has been backed out or committed. java.lang.Short camel.component.cics.enabled Whether to enable auto configuration of the cics component. This is enabled by default. java.lang.Boolean camel.component.cics.encoding The transfer encoding of the message. Cp1145 java.lang.String camel.component.cics.gateway-factory The connection factory to be used. The option is a com.redhat.camel.component.cics.pool.CICSGatewayFactory type. com.redhat.camel.component.cics.pool.CICSGatewayFactory camel.component.cics.host The address of the CICS Transaction Gateway that this instance connects to java.lang.String camel.component.cics.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. java.lang.Boolean camel.component.cics.password Password to use for authentication java.lang.String camel.component.cics.port The port of the CICS Transaction Gateway that this instance connects to. 2006 java.lang.Integer camel.component.cics.protocol The protocol that this component will use to connect to the CICS Transaction Gateway. tcp java.lang.String camel.component.cics.server The address of the CICS server that this instance connects to java.lang.String camel.component.cics.socket-connection-timeout The socket connection timeout java.lang.Integer camel.component.cics.ssl-keyring The full class name of the SSL key ring class or keystore file to be used for the client encrypted connection java.lang.String camel.component.cics.ssl-password The password for the encrypted key ring class or keystore java.lang.String camel.component.cics.user-id User ID to use for authentication java.lang.String
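As a hedged illustration of how the configuration options and message headers described above fit together, the following sketch invokes a CICS program over COMMAREA and logs the return code headers that the component sets on the response. It is a minimal sketch only: the host, port, credentials, and program name are placeholder values, the imports belong at class level, and the route goes inside a RouteBuilder configure() method.
// Minimal sketch, assuming the camel-cics dependency is on the classpath.
// Host, port, credentials, and program name below are illustrative placeholders.
import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER;
import static com.redhat.camel.component.cics.CICSConstants.CICS_COMM_AREA_SIZE_HEADER;
import static com.redhat.camel.component.cics.CICSConstants.CICS_RETURN_CODE_HEADER;
import static com.redhat.camel.component.cics.CICSConstants.CICS_RETURN_CODE_STRING_HEADER;
//...
from("direct:checkedRun").
    setHeader(CICS_PROGRAM_NAME_HEADER, "ECIREADY").
    setHeader(CICS_COMM_AREA_SIZE_HEADER, 18).
    setBody(constant("My input data")).
    to("cics:eci/commarea?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar").
    // The component stores the numeric return code and its symbolic name in these headers.
    log("CICS return code: ${header." + CICS_RETURN_CODE_HEADER + "} (${header." + CICS_RETURN_CODE_STRING_HEADER + "})");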
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cics-starter</artifactId> </dependency>",
"<dependency> <artifactId>com.ibm</artifactId> <groupId>ctgclient</groupId> <scope>system</scope> <systemPath>USD{basedir}/lib/ctgclient.jar</systemPath> </dependency>",
"cics://[interfaceType]/[dataExchangeType][?options]",
"cics://eci?host=xxx&port=xxx",
"cics://eci/channel?host=xxx&port=xxx",
"cics://[interfaceType]/[dataExchangeType][?options]",
"//.. import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_COMM_AREA_SIZE_HEADER; //. from(\"direct:run\"). setHeader(CICS_PROGRAM_NAME_HEADER, \"ECIREADY\"). setHeader(CICS_COMM_AREA_SIZE_HEADER, 18). setBody(constant(\"My input data\")). to(\"cics:eci/commarea?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar\");",
"//.. import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CONTAINER_NAME_HEADER; // from(\"direct:run\"). setHeader(CICS_PROGRAM_NAME_HEADER, \"EC03\"). setHeader(CICS_CHANNEL_NAME_HEADER, \"SAMPLECHANNEL\"). setHeader(CICS_CONTAINER_NAME_HEADER, \"INPUTDATA\"). setBody(constant(\"My input data\")). to(\"cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar\");",
"//.. import static com.redhat.camel.component.cics.CICSConstants.CICS_PROGRAM_NAME_HEADER; import static com.redhat.camel.component.cics.CICSConstants.CICS_CHANNEL_NAME_HEADER; // from(\"direct:run\"). setHeader(CICS_PROGRAM_NAME_HEADER, \"EC03\"). setHeader(CICS_CHANNEL_NAME_HEADER, \"SAMPLECHANNEL\"). process(exchange->{ byte[] thirdContainerData = HexFormat.of().parseHex(\"e04fd020ea3a6910a2d808002b30309d\"); Map<String,Object> containers = Map.of( \"firstContainerName\", \"firstContainerData\", \"secondContainerName\", \"secondContainerData\", \"thirdContainerName\", thirdContainerData ); exchange.getMessage().setBody(containers); }). to(\"cics:eci/channel?host=192.168.0.23&port=2006&protocol=tcp&userId=foo&password=bar\");"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cics-component-starter
|
Chapter 2. Storage classes
|
Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 2.2.1. 
Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault 2.2.1.1. Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads Secrets . Click Create Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 2.2.1.2. Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 2.2.1.3. Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . 
The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your OpenShift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint. Use the information collected in the previous steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. The sample yaml given below can be used to update or create the csi-kms-connection-details ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. 2.2.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager Procedure In the OpenShift Web Console, navigate to Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . 
WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. There are two options available to set the KMS connection details: Select existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select the Key Management Service Provider . If Vault is selected as the Key Management Service Provider , follow these steps: Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . If Thales CipherTrust Manager (using KMIP) is selected as the Key Management Service Provider , follow these steps: Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . 
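As a hedged sketch of that next step, the persistent volume claim below requests the newly created encrypted storage class. The claim name, namespace, size, and storage class name are illustrative placeholders, not values taken from this procedure:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-app-data            # illustrative name
  namespace: my-tenant-namespace      # illustrative tenant namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                   # illustrative size
  storageClassName: my-encrypted-sc   # the encrypted storage class created in this procedure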
Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 2.2.2.1. Overriding Vault connection details using tenant ConfigMap The Vault connections details can be reconfigured per tenant by creating a ConfigMap in the Openshift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overidden for the given tenant namespace can be specified under the data section as shown below: After the yaml is edited, click on Create .
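As a hedged verification sketch (assuming the tenant ConfigMap is named ceph-csi-kms-config, as in the sample yaml for this section, and that the edited yaml was saved locally as ceph-csi-kms-config.yaml), the same ConfigMap can also be created and checked from the CLI while the tenant namespace is the current project:
# Create the tenant override ConfigMap from the edited sample yaml.
oc create -f ceph-csi-kms-config.yaml
# Confirm that the override values are in place.
oc get configmap ceph-csi-kms-config -o yaml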
|
[
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF",
"apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>",
"apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details",
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }",
"--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\""
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/storage-classes_rhodf
|
Appendix A. Using your subscription
|
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions Red Hat Enterprise Linux 9 - Registering the system and managing subscriptions
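The Registration Assistant generates the exact command for your system. As a hedged illustration only (your generated command may differ), registration on Red Hat Enterprise Linux generally resembles the following:
# Run as root; you are prompted for your Red Hat Customer Portal credentials.
sudo subscription-manager register
# Check that the system is registered and review its subscription status.
sudo subscription-manager status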
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_rhea/3.0/html/using_rhea/using_your_subscription
|
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment
|
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.17 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . Install the legacy Migration Toolkit for Containers Operator on the OpenShift Container Platform 3 source cluster from the command line interface. Configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 7.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters using the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.17. MTC 1.8 only supports migrations from OpenShift Container Platform 4.14 and later. Table 7.1. MTC compatibility: Migrating from OpenShift Container Platform 3 to 4 Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.14 or later Stable MTC version MTC v.1.7. z MTC v.1.8. z Installation As described in this guide Install with OLM, release channel release-v1.8 Edge cases exist where network restrictions prevent OpenShift Container Platform 4 clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a OpenShift Container Platform 4 cluster in the cloud, the OpenShift Container Platform 4 cluster might have trouble connecting to the OpenShift Container Platform 3.11 cluster. In this case, it is possible to designate the OpenShift Container Platform 3.11 cluster as the control cluster and push workloads to the remote OpenShift Container Platform 4 cluster. 7.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . 
Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 7.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.17. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. 
Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 7.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.17, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 7.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 7.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 7.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. 
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 7.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 7.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 7.4.2.1. NetworkPolicy configuration 7.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 7.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 7.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. 
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 7.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 7.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 7.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 7.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 7.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 7.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. 
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 7.5.2. Retrieving Multicloud Object Gateway credentials Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . 7.5.3. Additional resources Procedure Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 7.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
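For the Multicloud Object Gateway credentials step in Section 7.5.2, a hedged sketch is shown below; it assumes the MCG plugin (the noobaa CLI) is installed and that you are logged in to the cluster, and the exact output format varies by version:
# Print MCG status, including the S3 endpoint and the admin access/secret keys.
noobaa status -n openshift-storage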
|
[
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migrating_from_version_3_to_4/installing-restricted-3-4
|
9.2. Automatic NUMA Balancing
|
9.2. Automatic NUMA Balancing Automatic NUMA balancing improves the performance of applications running on NUMA hardware systems. It is enabled by default on Red Hat Enterprise Linux 7 systems. An application will generally perform best when the threads of its processes are accessing memory on the same NUMA node as the threads are scheduled. Automatic NUMA balancing moves tasks (which can be threads or processes) closer to the memory they are accessing. It also moves application data to memory closer to the tasks that reference it. This is all done automatically by the kernel when automatic NUMA balancing is active. Automatic NUMA balancing uses a number of algorithms and data structures, which are only active and allocated if automatic NUMA balancing is active on the system: Periodic NUMA unmapping of process memory NUMA hinting fault Migrate-on-Fault (MoF) - moves memory to where the program using it runs task_numa_placement - moves running programs closer to their memory 9.2.1. Configuring Automatic NUMA Balancing Automatic NUMA balancing is enabled by default in Red Hat Enterprise Linux 7, and will automatically activate when booted on hardware with NUMA properties. Automatic NUMA balancing is enabled when both of the following conditions are met: # numactl --hardware shows multiple nodes # cat /proc/sys/kernel/numa_balancing shows 1 Manual NUMA tuning of applications will override automatic NUMA balancing, disabling periodic unmapping of memory, NUMA faults, migration, and automatic NUMA placement of those applications. In some cases, system-wide manual NUMA tuning is preferred. To disable automatic NUMA balancing, use the following command: To enable automatic NUMA balancing, use the following command:
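Beyond toggling the value at runtime with the echo commands listed below, the following hedged sketch shows one way to make the choice persistent across reboots through sysctl; the drop-in file name is arbitrary:
# Persist the setting (0 to disable, 1 to enable automatic NUMA balancing).
echo "kernel.numa_balancing = 0" > /etc/sysctl.d/50-numa-balancing.conf
# Apply the setting immediately without a reboot.
sysctl -p /etc/sysctl.d/50-numa-balancing.conf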
|
[
"echo 0 > /proc/sys/kernel/numa_balancing",
"echo 1 > /proc/sys/kernel/numa_balancing"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-auto_numa_balancing
|
Project APIs
|
Project APIs OpenShift Container Platform 4.17 Reference guide for project APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/project_apis/index
|
Chapter 39. Next steps
|
Chapter 39. Next steps Testing a decision service using test scenarios Packaging and deploying a Red Hat Process Automation Manager project
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/next_steps_2
|
Chapter 8. Removing failed or unwanted Ceph Object Storage devices
|
Chapter 8. Removing failed or unwanted Ceph Object Storage devices The failed or unwanted Ceph OSDs (Object Storage Devices) affect the performance of the storage infrastructure. Hence, to improve the reliability and resilience of the storage cluster, you must remove the failed or unwanted Ceph OSDs. If you have any failed or unwanted Ceph OSDs to remove: Verify the Ceph health status. For more information see: Verifying Ceph cluster is healthy . Based on the provisioning of the OSDs, remove failed or unwanted Ceph OSDs. See: Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation . Removing failed or unwanted Ceph OSDs provisioned using local storage devices . If you are using local disks, you can reuse these disks after removing the old OSDs. 8.1. Verifying Ceph cluster is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. 8.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation Follow the steps in the procedure to remove the failed or unwanted Ceph Object Storage Devices (OSDs) in dynamically provisioned Red Hat OpenShift Data Foundation. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Scale down the OSD deployment. Get the osd-prepare pod for the Ceph OSD to be removed. Delete the osd-prepare pod. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error such as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete the OSD deployment. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 8.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices You can remove failed or unwanted Ceph provisioned Object Storage Devices (OSDs) using local storage devices by following the steps in the procedure. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Forcibly mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure. Remove the failed OSD from the cluster. 
where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error such as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete persistent volume claim (PVC) resources associated with the failed OSD. Get the PVC associated with the failed OSD. Get the persistent volume (PV) associated with the PVC. Get the failed device name. Get the prepare-pod associated with the failed OSD. Delete the osd-prepare pod before removing the associated PVC. Delete the PVC associated with the failed OSD. Remove the failed device entry from the LocalVolume custom resource (CR). Log in to the node with the failed device. Record the /dev/disk/by-id/<id> for the failed device name. Optional: If the Local Storage Operator is used for provisioning the OSD, log in to the machine with the {osd-id} and remove the device symlink. Get the OSD symlink for the failed device name. Remove the symlink. Delete the PV associated with the OSD. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 8.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs If you get an error such as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, run the Object Storage Device (OSD) removal job with the FORCE_OSD_REMOVAL option to move the OSD to a destroyed state. Note You must use the FORCE_OSD_REMOVAL option only if all the PGs are in active state. If not, the PGs must either complete the backfilling or be investigated further to ensure that they are active.
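As an additional hedged verification sketch, not part of the procedure above, you can confirm that the removed OSD no longer appears in the CRUSH tree; this assumes the Rook Ceph toolbox pod is deployed in the openshift-storage namespace:
# Locate the toolbox pod and list the OSD tree from inside it.
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD ceph osd tree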
|
[
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc",
"oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>",
"failed_osd_id=<osd-id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc delete deployment rook-ceph-osd-<osd-id>",
"oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc scale deployment rook-ceph-osd-<osd-id> --replicas=0",
"failed_osd_id=<osd_id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -",
"oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc",
"oc get -n openshift-storage pvc <pvc-name>",
"oc get pv <pv-name-from-above-command> -oyaml | grep path",
"oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted",
"oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>",
"oc delete -n openshift-storage pvc <pvc-name-from-step-a>",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock/",
"oc debug node/<node_with_failed_osd>",
"ls -alh /mnt/local-storage/localblock",
"rm /mnt/local-storage/localblock/<failed-device-name>",
"oc delete pv <pv-name>",
"#oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>",
"oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/removing-failed-or-unwanted-ceph-object-storage-devices_rhodf
|
Chapter 17. Uninstalling an IdM client
|
Chapter 17. Uninstalling an IdM client As an administrator, you can remove an Identity Management (IdM) client from the environment. 17.1. Uninstalling an IdM client Uninstalling a client removes the client from the Identity Management (IdM) domain, along with all of the specific IdM configuration of system services, such as the System Security Services Daemon (SSSD). This restores the previous configuration of the client system. Procedure Enter the ipa-client-install --uninstall command: Optional: Check that you cannot obtain a Kerberos ticket-granting ticket (TGT) for an IdM user: If a Kerberos TGT has been returned successfully, follow the additional uninstallation steps in Uninstalling an IdM client: additional steps after multiple past installations . On the client, remove old Kerberos principals from each identified keytab other than /etc/krb5.keytab : On an IdM server, remove all DNS entries for the client host from IdM: On the IdM server, remove the client host entry from the IdM LDAP server. This removes all services and revokes all certificates issued for that host: Important Removing the client host entry from the IdM LDAP server is crucial if you think you might re-enroll the client in the future, with a different IP address or a different hostname. 17.2. Uninstalling an IdM client: additional steps after multiple past installations If you install and uninstall a host as an Identity Management (IdM) client multiple times, the uninstallation procedure might not restore the pre-IdM Kerberos configuration. In this situation, you must manually remove the IdM Kerberos configuration. In extreme cases, you must reinstall the operating system. Prerequisites You have used the ipa-client-install --uninstall command to uninstall the IdM client configuration from the host. However, you can still obtain a Kerberos ticket-granting ticket (TGT) for an IdM user from the IdM server. You have checked that the /var/lib/ipa-client/sysrestore directory is empty and hence you cannot restore the prior-to-IdM-client configuration of the system using the files in the directory. Procedure Check the /etc/krb5.conf.ipa file: If the contents of the /etc/krb5.conf.ipa file are the same as the contents of the krb5.conf file prior to the installation of the IdM client, you can: Remove the /etc/krb5.conf file: Rename the /etc/krb5.conf.ipa file to /etc/krb5.conf : If the contents of the /etc/krb5.conf.ipa file are not the same as the contents of the krb5.conf file prior to the installation of the IdM client, you can at least restore the Kerberos configuration to the state directly after the installation of the operating system: Re-install the krb5-libs package: As a dependency, this command will also re-install the krb5-workstation package and the original version of the /etc/krb5.conf file. Remove the /var/log/ipaclient-install.log file if present. Verification Try to obtain IdM user credentials. This should fail: The /etc/krb5.conf file is now restored to its factory state. As a result, you cannot obtain a Kerberos TGT for an IdM user on the host.
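The reference commands for this chapter are listed below. As a condensed illustration only, the following sketch walks through a clean uninstall followed by the server-side cleanup; the host names, keytab path, and the IDM.EXAMPLE.COM realm are placeholders, and the ipa dnsrecord-del step is shown in its interactive form as in the reference commands.

# On the client: uninstall, then confirm that IdM Kerberos authentication no longer works.
ipa-client-install --uninstall
kinit admin        # expected to fail with "Client ... not found in Kerberos database"

# On the client: remove stale principals from any additional keytabs (realm is a placeholder).
ipa-rmkeytab -k /path/to/keytab -r IDM.EXAMPLE.COM

# On an IdM server: remove the client's DNS records, then delete the host entry,
# which also removes its services and revokes its certificates.
ipa dnsrecord-del          # answer the prompts for the zone and record names
ipa host-del client.idm.example.com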
|
[
"ipa-client-install --uninstall",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa dnsrecord-del Record name: old-client-name Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa host-del client.idm.example.com",
"rm /etc/krb5.conf",
"mv /etc/krb5.conf.ipa /etc/krb5.conf",
"dnf reinstall krb5-libs",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/uninstalling-an-ipa-client_installing-identity-management
|
10.9. Additional Resources
|
10.9. Additional Resources ip-link(8) man page - Describes the ip utility's network device configuration commands. nmcli(1) man page - Describes NetworkManager's command-line tool. nmcli-examples(5) man page - Gives examples of nmcli commands. nm-settings(5) man page - Describes the settings and parameters of NetworkManager connections. nm-settings-ifcfg-rh(5) man page - Describes the ifcfg-rh settings in the /etc/sysconfig/network-scripts/ifcfg-* files.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configure_802_1q_vlan_tagging-additional_resources
|
Chapter 3. Configuring certificates
|
Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Verify that all certificates which include -----END CERTIFICATE----- also end with one carriage return after that line. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the step. Important To trigger the Ingress Operator to perform a rolling update, you must update the name of the secret. Because the kubelet automatically propagates changes to the secret in the volume mount, updating the secret contents does not trigger a rolling update. 
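The documented procedure ends here. As an optional spot check (a sketch, not part of the official steps), you can watch the default Ingress Controller pods roll out and then inspect the certificate that is actually served for a route under the .apps subdomain, for example the web console route; the host name below is an assumption based on the default console route naming.

# Wait for the default router to finish rolling out with the new certificate.
oc rollout status deployment/router-default -n openshift-ingress

# Inspect the certificate served for an .apps route (replace the cluster and base domain).
host=console-openshift-console.apps.<clustername>.<domain>
echo | openssl s_client -connect ${host}:443 -servername ${host} 2>/dev/null \
  | openssl x509 -noout -subject -issuer -enddate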
For more information, see this Red Hat Knowledgebase Solution . Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust. Note In hosted control plane clusters, you cannot replace self-signed certificates from the API. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Login to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. Do not include the port number. 2 Replace <secret> with the name used for the secret in the step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True . 
USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.15.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Note A new revision of the Kubernetes API server only rolls out if the API server named certificate is added for the first time. When the API server named certificate is renewed, a new revision of the Kubernetes API server does not roll out because the kube-apiserver pods dynamically reload the updated certificate. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and is only valid for internal communications. If your service is a headless service (no clusterIP value set), the generated certificate also contains a wildcard subject in the format of *.<service.name>.<service.namespace>.svc . Important Because the generated certificates contain wildcard subjects for headless services, you must not use the service CA if your client must differentiate between individual pods. In this case: Generate individual TLS certificates by using a different CA. Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates. Prerequisites You must have a service defined. 
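If you do not yet have a service to experiment with, the following is a minimal sketch that satisfies this prerequisite. The serving-cert-demo namespace, the app: test1 selector, and the port values are placeholders chosen to match the test1 example used in the procedure that follows, and the namespace is assumed to exist already.

# Create a throwaway service named test1 (selector and ports are placeholders).
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test1
  namespace: serving-cert-demo
spec:
  selector:
    app: test1
  ports:
  - name: https
    port: 8443
    targetPort: 8443
EOF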
Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. 
For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 
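Regardless of which resource type you annotate, you can decode the injected bundle and check that it is a valid CA certificate. The following sketch uses the mutating webhook configuration test1 from the example above; the comparison against the openshift-service-ca.crt config map is an assumption based on recent OpenShift releases, where the service CA is also published into each namespace.

# Decode the injected bundle from the first webhook entry and inspect it.
oc get mutatingwebhookconfigurations test1 \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d \
  | openssl x509 -noout -subject -enddate

# Optionally compare it with the service CA published in the current namespace
# (config map name is an assumption; adjust if your release differs).
oc get configmap openshift-service-ca.crt -o jsonpath='{.data.service-ca\.crt}' \
  | openssl x509 -noout -subject -enddate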
3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA. 
This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization
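After updating the cluster-wide proxy configuration, you can confirm that the proxy validator accepted the bundle and merged it into the trusted-ca-bundle config map described above. The following is a sketch; the certificate count simply shows that your custom CA is now part of the merged bundle.

# Confirm the source config map exists in openshift-config.
oc get configmap custom-ca -n openshift-config

# Count the certificates in the merged bundle maintained by the platform.
oc get configmap trusted-ca-bundle -n openshift-config-managed \
  -o jsonpath='{.data.ca-bundle\.crt}' | grep -c 'BEGIN CERTIFICATE'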
|
[
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress",
"oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator",
"oc login -u kubeadmin -p <password> https://FQDN:6443",
"oc config view --flatten > kubeconfig-newapi",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config",
"oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2",
"oc get apiserver cluster -o yaml",
"spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.15.0 True False False 145m",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2",
"oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1",
"oc describe service <service_name>",
"Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837",
"oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true",
"oc get configmap <config_map_name> -o yaml",
"apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----",
"oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true",
"oc get apiservice <api_service_name> -o yaml",
"apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>",
"oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true",
"oc get crd <crd_name> -o yaml",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc describe service <service_name>",
"service.beta.openshift.io/serving-cert-secret-name: <secret>",
"oc delete secret <secret> 1",
"oc get secret <service_name>",
"NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s",
"oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate",
"oc delete secret/signing-key -n openshift-service-ca",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/configuring-certificates
|
Part VI. Multitenancy
|
Part VI. Multitenancy
| null |
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/multitenancy
|
Chapter 22. Automatically discovering bare metal nodes
|
Chapter 22. Automatically discovering bare metal nodes You can use auto-discovery to register overcloud nodes and generate their metadata, without the need to create an instackenv.json file. This improvement can help to reduce the time it takes to collect information about a node. For example, if you use auto-discovery, you do not need to collate the IPMI IP addresses and subsequently create the instackenv.json file. 22.1. Enabling auto-discovery Enable and configure Bare Metal auto-discovery to automatically discover and import nodes that join your provisioning network when booting with PXE. Procedure Enable Bare Metal auto-discovery in the undercloud.conf file: enable_node_discovery - When enabled, any node that boots the introspection ramdisk using PXE is enrolled in the Bare Metal service (ironic) automatically. discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi . Add your IPMI credentials to ironic: Add your IPMI credentials to a file named ipmi-credentials.json . Replace the SampleUsername , RedactedSecurePassword , and bmc_address values in this example to suit your environment: Import the IPMI credentials file into ironic: 22.2. Testing auto-discovery PXE boot a node that is connected to your provisioning network to test the Bare Metal auto-discovery feature. Procedure Power on the required nodes. Run the openstack baremetal node list command. You should see the new nodes listed in an enrolled state: Set the resource class for each node: Configure the kernel and ramdisk for each node: Set all nodes to available: 22.3. Using rules to discover different vendor hardware If you have a heterogeneous hardware environment, you can use introspection rules to assign credentials and remote management credentials. For example, you might want a separate discovery rule to handle your Dell nodes that use DRAC. Procedure Create a file named dell-drac-rules.json with the following contents. Replace the user name and password values in this example to suit your environment: Import the rule into ironic:
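The reference configuration snippet, credential rules, and commands for this chapter are listed below. As a condensed sketch of the overall flow, the assumption that re-running openstack undercloud install is how undercloud.conf changes are applied follows the usual director workflow; verify against your deployment before running it.

# 1. In undercloud.conf ([DEFAULT] section):
#      enable_node_discovery = True
#      discovery_default_driver = ipmi
# 2. Re-apply the undercloud configuration so the change takes effect.
openstack undercloud install

# 3. Import the default IPMI credential rule, then PXE boot the nodes and
#    watch them appear in the enroll state.
openstack baremetal introspection rule import ipmi-credentials.json
openstack baremetal node list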
|
[
"enable_node_discovery = True discovery_default_driver = ipmi",
"[ { \"description\": \"Set default IPMI credentials\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] } ]",
"openstack baremetal introspection rule import ipmi-credentials.json",
"openstack baremetal node list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | c6e63aec-e5ba-4d63-8d37-bd57628258e8 | None | None | power off | enroll | False | | 0362b7b2-5b9c-4113-92e1-0b34a2535d9b | None | None | power off | enroll | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+",
"for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node set USDNODE --resource-class baremetal ; done",
"for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node manage USDNODE ; done openstack overcloud node configure --all-manageable",
"for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node provide USDNODE ; done",
"[ { \"description\": \"Set default IPMI credentials\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true}, {\"op\": \"ne\", \"field\": \"data://inventory.system_vendor.manufacturer\", \"value\": \"Dell Inc.\"} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] }, { \"description\": \"Set the vendor driver for Dell hardware\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true}, {\"op\": \"eq\", \"field\": \"data://inventory.system_vendor.manufacturer\", \"value\": \"Dell Inc.\"} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver\", \"value\": \"idrac\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] } ]",
"openstack baremetal introspection rule import dell-drac-rules.json"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_automatically-discovering-bare-metal-nodes
|
17.14. Applying Network Filtering
|
17.14. Applying Network Filtering This section provides an introduction to libvirt's network filters, their goals, concepts and XML format. 17.14.1. Introduction The goal of the network filtering, is to enable administrators of a virtualized system to configure and enforce network traffic filtering rules on virtual machines and manage the parameters of network traffic that virtual machines are allowed to send or receive. The network traffic filtering rules are applied on the host physical machine when a virtual machine is started. Since the filtering rules cannot be circumvented from within the virtual machine, it makes them mandatory from the point of view of a virtual machine user. From the point of view of the guest virtual machine, the network filtering system allows each virtual machine's network traffic filtering rules to be configured individually on a per interface basis. These rules are applied on the host physical machine when the virtual machine is started and can be modified while the virtual machine is running. The latter can be achieved by modifying the XML description of a network filter. Multiple virtual machines can make use of the same generic network filter. When such a filter is modified, the network traffic filtering rules of all running virtual machines that reference this filter are updated. The machines that are not running will update on start. As previously mentioned, applying network traffic filtering rules can be done on individual network interfaces that are configured for certain types of network configurations. Supported network types include: network ethernet -- must be used in bridging mode bridge Example 17.1. An example of network filtering The interface XML is used to reference a top-level filter. In the following example, the interface description references the filter clean-traffic. Network filters are written in XML and may either contain: references to other filters, rules for traffic filtering, or hold a combination of both. The above referenced filter clean-traffic is a filter that only contains references to other filters and no actual filtering rules. Since references to other filters can be used, a tree of filters can be built. The clean-traffic filter can be viewed using the command: # virsh nwfilter-dumpxml clean-traffic . As previously mentioned, a single network filter can be referenced by multiple virtual machines. Since interfaces will typically have individual parameters associated with their respective traffic filtering rules, the rules described in a filter's XML can be generalized using variables. In this case, the variable name is used in the filter XML and the name and value are provided at the place where the filter is referenced. Example 17.2. Description extended In the following example, the interface description has been extended with the parameter IP and a dotted IP address as a value. In this particular example, the clean-traffic network traffic filter will be represented with the IP address parameter 10.0.0.1 and as per the rule dictates that all traffic from this interface will always be using 10.0.0.1 as the source IP address, which is one of the purpose of this particular filter. 17.14.2. Filtering Chains Filtering rules are organized in filter chains. These chains can be thought of as having a tree structure with packet filtering rules as entries in individual chains (branches). 
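Before moving on to filtering chains, it may help to see the interface XML that Examples 17.1 and 17.2 describe, since the XML itself is not reproduced above. The following is a sketch only: the guest name demo-guest, the bridge br0, the MAC address, the file path, and the IP value 10.0.0.1 are placeholders, and in practice you would typically add the <interface> element with virsh edit rather than attach it from a file.

# Write an interface definition that references the clean-traffic filter and
# binds the IP parameter, then attach it to a guest's persistent configuration.
cat > /tmp/iface-clean-traffic.xml <<'EOF'
<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:8c:de:01'/>
  <filterref filter='clean-traffic'>
    <parameter name='IP' value='10.0.0.1'/>
  </filterref>
</interface>
EOF
virsh attach-device demo-guest /tmp/iface-clean-traffic.xml --config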
Packets start their filter evaluation in the root chain and can then continue their evaluation in other chains, return from those chains back into the root chain or be dropped or accepted by a filtering rule in one of the traversed chains. Libvirt's network filtering system automatically creates individual root chains for every virtual machine's network interface on which the user chooses to activate traffic filtering. The user may write filtering rules that are either directly instantiated in the root chain or may create protocol-specific filtering chains for efficient evaluation of protocol-specific rules. The following chains exist: root mac stp (spanning tree protocol) vlan arp and rarp ipv4 ipv6 Multiple chains evaluating the mac, stp, vlan, arp, rarp, ipv4, or ipv6 protocol can be created using the protocol name only as a prefix in the chain's name. Example 17.3. ARP traffic filtering This example allows chains with names arp-xyz or arp-test to be specified and have their ARP protocol packets evaluated in those chains. The following filter XML shows an example of filtering ARP traffic in the arp chain. The consequence of putting ARP-specific rules in the arp chain, rather than for example in the root chain, is that packets protocols other than ARP do not need to be evaluated by ARP protocol-specific rules. This improves the efficiency of the traffic filtering. However, one must then pay attention to only putting filtering rules for the given protocol into the chain since other rules will not be evaluated. For example, an IPv4 rule will not be evaluated in the ARP chain since IPv4 protocol packets will not traverse the ARP chain. 17.14.3. Filtering Chain Priorities As previously mentioned, when creating a filtering rule, all chains are connected to the root chain. The order in which those chains are accessed is influenced by the priority of the chain. The following table shows the chains that can be assigned a priority and their default priorities. Table 17.1. Filtering chain default priorities values Chain (prefix) Default priority stp -810 mac -800 vlan -750 ipv4 -700 ipv6 -600 arp -500 rarp -400 Note A chain with a lower priority value is accessed before one with a higher value. The chains listed in Table 17.1, "Filtering chain default priorities values" can be also be assigned custom priorities by writing a value in the range [-1000 to 1000] into the priority (XML) attribute in the filter node. Section 17.14.2, "Filtering Chains" filter shows the default priority of -500 for arp chains, for example. 17.14.4. Usage of Variables in Filters There are two variables that have been reserved for usage by the network traffic filtering subsystem: MAC and IP. MAC is designated for the MAC address of the network interface. A filtering rule that references this variable will automatically be replaced with the MAC address of the interface. This works without the user having to explicitly provide the MAC parameter. Even though it is possible to specify the MAC parameter similar to the IP parameter above, it is discouraged since libvirt knows what MAC address an interface will be using. The parameter IP represents the IP address that the operating system inside the virtual machine is expected to use on the given interface. The IP parameter is special in so far as the libvirt daemon will try to determine the IP address (and thus the IP parameter's value) that is being used on an interface if the parameter is not explicitly provided but referenced. 
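As a concrete illustration of how a rule references these variables, the following sketch defines a small custom filter that drops outgoing IP packets whose source address does not match the value bound to IP. It is written in the spirit of the no-ip-spoofing behavior discussed in this chapter, but the name demo-no-ip-spoofing and the file path are placeholders, and this is not the filter shipped with libvirt.

# The heredoc is quoted so the shell does not expand $IP; libvirt substitutes it
# at instantiation time with the value supplied by the interface's filterref.
cat > /tmp/demo-no-ip-spoofing.xml <<'EOF'
<filter name='demo-no-ip-spoofing' chain='ipv4'>
  <rule action='drop' direction='out' priority='500'>
    <ip match='no' srcipaddr='$IP'/>
  </rule>
</filter>
EOF
virsh nwfilter-define /tmp/demo-no-ip-spoofing.xml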
For current limitations on IP address detection, consult the section on limitations Section 17.14.12, "Limitations" on how to use this feature and what to expect when using it. The XML file shown in Section 17.14.2, "Filtering Chains" contains the filter no-arp-spoofing , which is an example of using a network filter XML to reference the MAC and IP variables. Note that referenced variables are always prefixed with the character USD . The format of the value of a variable must be of the type expected by the filter attribute identified in the XML. In the above example, the IP parameter must hold a legal IP address in standard format. Failure to provide the correct structure will result in the filter variable not being replaced with a value and will prevent a virtual machine from starting or will prevent an interface from attaching when hot plugging is being used. Some of the types that are expected for each XML attribute are shown in the example Example 17.4, "Sample variable types" . Example 17.4. Sample variable types As variables can contain lists of elements, (the variable IP can contain multiple IP addresses that are valid on a particular interface, for example), the notation for providing multiple elements for the IP variable is: This XML file creates filters to enable multiple IP addresses per interface. Each of the IP addresses will result in a separate filtering rule. Therefore, using the XML above and the following rule, three individual filtering rules (one for each IP address) will be created: As it is possible to access individual elements of a variable holding a list of elements, a filtering rule like the following accesses the 2nd element of the variable DSTPORTS . Example 17.5. Using a variety of variables As it is possible to create filtering rules that represent all of the permissible rules from different lists using the notation USDVARIABLE[@<iterator id="x">] . The following rule allows a virtual machine to receive traffic on a set of ports, which are specified in DSTPORTS , from the set of source IP address specified in SRCIPADDRESSES . The rule generates all combinations of elements of the variable DSTPORTS with those of SRCIPADDRESSES by using two independent iterators to access their elements. Assign concrete values to SRCIPADDRESSES and DSTPORTS as shown: Assigning values to the variables using USDSRCIPADDRESSES[@1] and USDDSTPORTS[@2] would then result in all variants of addresses and ports being created as shown: 10.0.0.1, 80 10.0.0.1, 8080 11.1.2.3, 80 11.1.2.3, 8080 Accessing the same variables using a single iterator, for example by using the notation USDSRCIPADDRESSES[@1] and USDDSTPORTS[@1] , would result in parallel access to both lists and result in the following combination: 10.0.0.1, 80 11.1.2.3, 8080 Note USDVARIABLE is short-hand for USDVARIABLE[@0] . The former notation always assumes the role of iterator with iterator id="0" added as shown in the opening paragraph at the top of this section. 17.14.5. Automatic IP Address Detection and DHCP Snooping This section provides information about automatic IP address detection and DHCP snooping. 17.14.5.1. Introduction The detection of IP addresses used on a virtual machine's interface is automatically activated if the variable IP is referenced but no value has been assigned to it. The variable CTRL_IP_LEARNING can be used to specify the IP address learning method to use. Valid values include: any , dhcp , or none . 
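Before each learning method is described in detail below, note that the method is selected per interface by passing CTRL_IP_LEARNING as a filter parameter. Relative to the interface sketch shown earlier, only the filterref block changes; the fragment below is a sketch, and dhcp is just one of the valid values.

<filterref filter='clean-traffic'>
  <parameter name='CTRL_IP_LEARNING' value='dhcp'/>
</filterref>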
The value any instructs libvirt to use any packet to determine the address in use by a virtual machine, which is the default setting if the variable CTRL_IP_LEARNING is not set. This method will only detect a single IP address per interface. Once a guest virtual machine's IP address has been detected, its IP network traffic will be locked to that address, if for example, IP address spoofing is prevented by one of its filters. In that case, the user of the VM will not be able to change the IP address on the interface inside the guest virtual machine, which would be considered IP address spoofing. When a guest virtual machine is migrated to another host physical machine or resumed after a suspend operation, the first packet sent by the guest virtual machine will again determine the IP address that the guest virtual machine can use on a particular interface. The value of dhcp instructs libvirt to only honor DHCP server-assigned addresses with valid leases. This method supports the detection and usage of multiple IP address per interface. When a guest virtual machine resumes after a suspend operation, any valid IP address leases are applied to its filters. Otherwise the guest virtual machine is expected to use DHCP to obtain a new IP addresses. When a guest virtual machine migrates to another physical host physical machine, the guest virtual machine is required to re-run the DHCP protocol. If CTRL_IP_LEARNING is set to none , libvirt does not do IP address learning and referencing IP without assigning it an explicit value is an error. 17.14.5.2. DHCP Snooping CTRL_IP_LEARNING= dhcp (DHCP snooping) provides additional anti-spoofing security, especially when combined with a filter allowing only trusted DHCP servers to assign IP addresses. To enable this, set the variable DHCPSERVER to the IP address of a valid DHCP server and provide filters that use this variable to filter incoming DHCP responses. When DHCP snooping is enabled and the DHCP lease expires, the guest virtual machine will no longer be able to use the IP address until it acquires a new, valid lease from a DHCP server. If the guest virtual machine is migrated, it must get a new valid DHCP lease to use an IP address (for example by bringing the VM interface down and up again). Note Automatic DHCP detection listens to the DHCP traffic the guest virtual machine exchanges with the DHCP server of the infrastructure. To avoid denial-of-service attacks on libvirt, the evaluation of those packets is rate-limited, meaning that a guest virtual machine sending an excessive number of DHCP packets per second on an interface will not have all of those packets evaluated and thus filters may not get adapted. Normal DHCP client behavior is assumed to send a low number of DHCP packets per second. Further, it is important to setup appropriate filters on all guest virtual machines in the infrastructure to avoid them being able to send DHCP packets. Therefore, guest virtual machines must either be prevented from sending UDP and TCP traffic from port 67 to port 68 or the DHCPSERVER variable should be used on all guest virtual machines to restrict DHCP server messages to only be allowed to originate from trusted DHCP servers. At the same time anti-spoofing prevention must be enabled on all guest virtual machines in the subnet. Example 17.6. Activating IPs for DHCP snooping The following XML provides an example for the activation of IP address learning using the DHCP snooping method: 17.14.6. 
Reserved Variables Table 17.2, "Reserved variables" shows the variables that are considered reserved and are used by libvirt: Table 17.2. Reserved variables Variable Name Definition MAC The MAC address of the interface IP The list of IP addresses in use by an interface IPV6 Not currently implemented: the list of IPV6 addresses in use by an interface DHCPSERVER The list of IP addresses of trusted DHCP servers DHCPSERVERV6 Not currently implemented: The list of IPv6 addresses of trusted DHCP servers CTRL_IP_LEARNING The choice of the IP address detection mode 17.14.7. Element and Attribute Overview The root element required for all network filters is named <filter> with two possible attributes. The name attribute provides a unique name of the given filter. The chain attribute is optional but allows certain filters to be better organized for more efficient processing by the firewall subsystem of the underlying host physical machine. Currently, the system only supports the following chains: root , ipv4 , ipv6 , arp and rarp . 17.14.8. References to Other Filters Any filter may hold references to other filters. Individual filters may be referenced multiple times in a filter tree but references between filters must not introduce loops. Example 17.7. An Example of a clean traffic filter The following shows the XML of the clean-traffic network filter referencing several other filters. To reference another filter, the XML node <filterref> needs to be provided inside a filter node. This node must have the attribute filter whose value contains the name of the filter to be referenced. New network filters can be defined at any time and may contain references to network filters that are not known to libvirt, yet. However, once a virtual machine is started or a network interface referencing a filter is to be hot-plugged, all network filters in the filter tree must be available. Otherwise the virtual machine will not start or the network interface cannot be attached. 17.14.9. Filter Rules The following XML shows a simple example of a network traffic filter implementing a rule to drop traffic if the IP address (provided through the value of the variable IP) in an outgoing IP packet is not the expected one, thus preventing IP address spoofing by the VM. Example 17.8. Example of network traffic filtering The traffic filtering rule starts with the rule node. This node may contain up to three of the following attributes: action is mandatory can have the following values: drop (matching the rule silently discards the packet with no further analysis) reject (matching the rule generates an ICMP reject message with no further analysis) accept (matching the rule accepts the packet with no further analysis) return (matching the rule passes this filter, but returns control to the calling filter for further analysis) continue (matching the rule goes on to the rule for further analysis) direction is mandatory can have the following values: in for incoming traffic out for outgoing traffic inout for incoming and outgoing traffic priority is optional. The priority of the rule controls the order in which the rule will be instantiated relative to other rules. Rules with lower values will be instantiated before rules with higher values. Valid values are in the range of -1000 to 1000. If this attribute is not provided, priority 500 will be assigned by default. Note that filtering rules in the root chain are sorted with filters connected to the root chain following their priorities. 
This allows filtering rules to be interleaved with access to filter chains. See Section 17.14.3, "Filtering Chain Priorities" for more information. statematch is optional. Possible values are '0' or 'false' to turn the underlying connection state matching off. The default setting is 'true' or 1. For more information, see Section 17.14.11, "Advanced Filter Configuration Topics" . The above example Example 17.7, "An Example of a clean traffic filter" indicates that the traffic of type ip will be associated with the chain ipv4 and the rule will have priority= 500 . If, for example, another filter is referenced whose traffic of type ip is also associated with the chain ipv4 , then that filter's rules will be ordered relative to the priority= 500 of the shown rule. A rule may contain a single rule for filtering of traffic. The above example shows that traffic of type ip is to be filtered. 17.14.10. Supported Protocols The following sections list and give some details about the protocols that are supported by the network filtering subsystem. This type of traffic rule is provided in the rule node as a nested node. Depending on the traffic type a rule is filtering, the attributes are different. The above example showed the single attribute srcipaddr that is valid inside the ip traffic filtering node. The following sections show what attributes are valid and what type of data they are expecting. The following datatypes are available: UINT8 : 8 bit integer; range 0-255 UINT16: 16 bit integer; range 0-65535 MAC_ADDR: MAC address in dotted decimal format, for example 00:11:22:33:44:55 MAC_MASK: MAC address mask in MAC address format, for instance, FF:FF:FF:FC:00:00 IP_ADDR: IP address in dotted decimal format, for example 10.1.2.3 IP_MASK: IP address mask in either dotted decimal format (255.255.248.0) or CIDR mask (0-32) IPV6_ADDR: IPv6 address in numbers format, for example FFFF::1 IPV6_MASK: IPv6 mask in numbers format (FFFF:FFFF:FC00::) or CIDR mask (0-128) STRING: A string BOOLEAN: 'true', 'yes', '1' or 'false', 'no', '0' IPSETFLAGS: The source and destination flags of the ipset described by up to 6 'src' or 'dst' elements selecting features from either the source or destination part of the packet header; example: src,src,dst. The number of 'selectors' to provide here depends on the type of ipset that is referenced Every attribute except for those of type IP_MASK or IPV6_MASK can be negated using the match attribute with value no . Multiple negated attributes may be grouped together. The following XML fragment shows such an example using abstract attributes. A rule is evaluated as a logical AND expression within the boundaries of the given protocol attributes. Thus, if a single attribute's value does not match the one given in the rule, the whole rule will be skipped during the evaluation process. Therefore, in the above example incoming traffic will only be dropped if: the protocol property attribute1 does not match value1 , and the protocol property attribute2 does not match value2 , and the protocol property attribute3 matches value3 . 17.14.10.1. MAC (Ethernet) Protocol ID: mac Rules of this type should go into the root chain. Table 17.3. MAC protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination protocolid UINT16 (0x600-0xffff), STRING Layer 3 protocol ID.
Valid strings include [arp, rarp, ipv4, ipv6] comment STRING text string up to 256 characters The filter can be written as such: 17.14.10.2. VLAN (802.1Q) Protocol ID: vlan Rules of this type should go either into the root or vlan chain. Table 17.4. VLAN protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination vlan-id UINT16 (0x0-0xfff, 0 - 4095) VLAN ID encap-protocol UINT16 (0x03c-0xfff), String Encapsulated layer 3 protocol ID, valid strings are arp, ipv4, ipv6 comment STRING text string up to 256 characters 17.14.10.3. STP (Spanning Tree Protocol) Protocol ID: stp Rules of this type should go either into the root or stp chain. Table 17.5. STP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender type UINT8 Bridge Protocol Data Unit (BPDU) type flags UINT8 BPDU flags root-priority UINT16 Root priority range start root-priority-hi UINT16 (0x0-0xfff, 0 - 4095) Root priority range end root-address MAC_ADDRESS root MAC Address root-address-mask MAC_MASK root MAC Address mask root-cost UINT32 Root path cost (range start) root-cost-hi UINT32 Root path cost range end sender-priority-hi UINT16 Sender priority range end sender-address MAC_ADDRESS BPDU sender MAC address sender-address-mask MAC_MASK BPDU sender MAC address mask port UINT16 Port identifier (range start) port_hi UINT16 Port identifier range end msg-age UINT16 Message age timer (range start) msg-age-hi UINT16 Message age timer range end max-age-hi UINT16 Maximum age time range end hello-time UINT16 Hello time timer (range start) hello-time-hi UINT16 Hello time timer range end forward-delay UINT16 Forward delay (range start) forward-delay-hi UINT16 Forward delay range end comment STRING text string up to 256 characters 17.14.10.4. ARP/RARP Protocol ID: arp or rarp Rules of this type should either go into the root or arp/rarp chain. Table 17.6. ARP and RARP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination hwtype UINT16 Hardware type protocoltype UINT16 Protocol type opcode UINT16, STRING Opcode; valid strings are: Request, Reply, Request_Reverse, Reply_Reverse, DRARP_Request, DRARP_Reply, DRARP_Error, InARP_Request, ARP_NAK arpsrcmacaddr MAC_ADDR Source MAC address in ARP/RARP packet arpdstmacaddr MAC_ADDR Destination MAC address in ARP/RARP packet arpsrcipaddr IP_ADDR Source IP address in ARP/RARP packet arpdstipaddr IP_ADDR Destination IP address in ARP/RARP packet gratuitous BOOLEAN Boolean indicating whether to check for a gratuitous ARP packet comment STRING text string up to 256 characters 17.14.10.5. IPv4 Protocol ID: ip Rules of this type should either go into the root or ipv4 chain. Table 17.7.
IPv4 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address protocol UINT8, STRING Layer 4 protocol identifier. Valid strings for protocol are: tcp, udp, udplite, esp, ah, icmp, igmp, sctp srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UINT16 Start of range of valid destination ports; requires protocol dstportend UINT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters 17.14.10.6. IPv6 Protocol ID: ipv6 Rules of this type should either go into the root or ipv6 chain. Table 17.8. IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address protocol UINT8, STRING Layer 4 protocol identifier. Valid strings for protocol are: tcp, udp, udplite, esp, ah, icmpv6, sctp srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UINT16 Start of range of valid destination ports; requires protocol dstportend UINT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters 17.14.10.7. TCP/UDP/SCTP Protocol ID: tcp, udp, sctp The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.9. TCP/UDP/SCTP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR Start of range of source IP address srcipto IP_ADDR End of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UINT16 Start of range of valid destination ports; requires protocol dstportend UINT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE flags STRING TCP-only: format of mask/flags with mask and flags each being a comma separated list of SYN,ACK,URG,PSH,FIN,RST or NONE or ALL ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute
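To make the attributes of Table 17.9 more concrete, the following is a minimal, hedged sketch of a rule that accepts incoming TCP traffic to destination ports 80 through 443 from a single /24 network; the addresses, ports, and priority are illustrative assumptions only, not values prescribed by this guide:

<rule action='accept' direction='in' priority='500'>
  <tcp srcipaddr='10.1.2.0' srcipmask='24' dstportstart='80' dstportend='443'/>
</rule>

Each attribute used here ( srcipaddr , srcipmask , dstportstart , dstportend ) corresponds to a row in the table above.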
17.14.10.8. ICMP Protocol ID: icmp Note: The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.10. ICMP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to the MAC address of the sender dstmacaddr MAC_ADDR MAC address of the destination dstmacmask MAC_MASK Mask applied to the MAC address of the destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address type UINT16 ICMP type code UINT16 ICMP code comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.9. IGMP, ESP, AH, UDPLITE, 'ALL' Protocol ID: igmp, esp, ah, udplite, all The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.11. IGMP, ESP, AH, UDPLITE, 'ALL' Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to the MAC address of the sender dstmacaddr MAC_ADDR MAC address of the destination dstmacmask MAC_MASK Mask applied to the MAC address of the destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.10. TCP/UDP/SCTP over IPV6 Protocol ID: tcp-ipv6, udp-ipv6, sctp-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.12. TCP, UDP, SCTP over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address srcportstart UINT16 Start of range of valid source ports srcportend UINT16 End of range of valid source ports dstportstart UINT16 Start of range of valid destination ports dstportend UINT16 End of range of valid destination ports comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.11. ICMPv6 Protocol ID: icmpv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.13.
ICMPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address type UINT16 ICMPv6 type code UINT16 ICMPv6 code comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.12. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 Protocol ID: igmp-ipv6, esp-ipv6, ah-ipv6, udplite-ipv6, all-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.14. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.11. Advanced Filter Configuration Topics The following sections discuss advanced filter configuration topics. 17.14.11.1. Connection tracking The network filtering subsystem (on Linux) makes use of the connection tracking support of iptables. This helps in enforcing the direction of the network traffic (state match) as well as counting and limiting the number of simultaneous connections towards a guest virtual machine. As an example, if a guest virtual machine has TCP port 8080 open as a server, clients may connect to the guest virtual machine on port 8080. Connection tracking and enforcement of the direction of traffic then prevent the guest virtual machine from initiating a connection from (TCP client) port 8080 back to a remote host physical machine. More importantly, tracking helps to prevent remote attackers from establishing a connection back to a guest virtual machine. For example, if the user inside the guest virtual machine established a connection to port 80 on an attacker site, the attacker will not be able to initiate a connection from TCP port 80 back towards the guest virtual machine. By default, the connection state match that enables connection tracking and enforcement of the direction of traffic is turned on. Example 17.9. XML example for turning off connections to the TCP port The following shows an example XML fragment where this feature has been turned off for incoming connections to TCP port 12345. This now allows incoming traffic to TCP port 12345, but would also enable the initiation from (client) TCP port 12345 within the VM, which may or may not be desirable.
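When experimenting with connection tracking and the statematch attribute, it can help to look at the tracking table on the host physical machine. The following is a hedged example, assuming the conntrack-tools package is installed (otherwise the raw /proc file can be read where the kernel provides it):

# conntrack -L | grep 12345
# cat /proc/net/nf_conntrack | grep 12345

Both commands list the tracked connections, filtered here for the TCP port used in Example 17.9.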
17.14.11.2. Limiting number of connections To limit the number of connections a guest virtual machine may establish, a rule must be provided that sets a limit of connections for a given type of traffic. For example, a VM might be allowed to ping only one other IP address at a time and to have only one active incoming ssh connection at a time. Example 17.10. XML sample file that sets limits to connections The following XML fragment can be used to limit connections. Note Limitation rules must be listed in the XML prior to the rules for accepting traffic. According to the XML file in Example 17.10, "XML sample file that sets limits to connections" , an additional rule allowing DNS traffic (sent to port 53) to go out of the guest virtual machine has been added to avoid ssh sessions to port 22 not getting established for reasons related to DNS lookup failures by the ssh daemon. Leaving this rule out may result in the ssh client hanging unexpectedly as it tries to connect. Additional caution should be used with regard to handling timeouts related to tracking of traffic. An ICMP ping that the user may have terminated inside the guest virtual machine may have a long timeout in the host physical machine's connection tracking system and will therefore not allow another ICMP ping to go through. The best solution is to tune the timeout through the host physical machine's sysctl interface with the following command: # echo 3 > /proc/sys/net/netfilter/nf_conntrack_icmp_timeout . This command sets the ICMP connection tracking timeout to 3 seconds. The effect of this is that once one ping is terminated, another one can start after 3 seconds. If for any reason the guest virtual machine has not properly closed its TCP connection, the connection will be held open for a longer period of time, especially if the TCP timeout value was set for a large amount of time on the host physical machine. In addition, any idle connection may result in a timeout in the connection tracking system which can be re-activated once packets are exchanged. However, if the limit is set too low, newly initiated connections may force an idle connection into TCP backoff. Therefore, the limit of connections should be set rather high so that fluctuations in new TCP connections do not cause odd traffic behavior in relation to idle connections. 17.14.11.3. Command-line tools virsh has been extended with life-cycle support for network filters. All commands related to the network filtering subsystem start with the prefix nwfilter . The following commands are available: nwfilter-list : lists UUIDs and names of all network filters nwfilter-define : defines a new network filter or updates an existing one (must supply a name) nwfilter-undefine : deletes a specified network filter (must supply a name). Do not delete a network filter currently in use. nwfilter-dumpxml : displays a specified network filter (must supply a name) nwfilter-edit : edits a specified network filter (must supply a name)
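A short, hedged example of these virsh commands in use; the file path and any filter name other than clean-traffic are placeholder assumptions:

# virsh nwfilter-list
# virsh nwfilter-dumpxml clean-traffic
# virsh nwfilter-define /tmp/test-eth0.xml
# virsh nwfilter-undefine test-eth0

The XML file passed to nwfilter-define is assumed to contain a <filter> definition such as the ones shown later in this section.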
17.14.11.4. Pre-existing network filters The following is a list of example network filters that are automatically installed with libvirt: Table 17.15. Pre-existing network filters Filter Name Description allow-arp Accepts all incoming and outgoing Address Resolution Protocol (ARP) traffic to a guest virtual machine. no-arp-spoofing , no-arp-mac-spoofing , and no-arp-ip-spoofing These filters prevent a guest virtual machine from spoofing ARP traffic. In addition, they only allow ARP request and reply messages, and enforce that those packets contain: no-arp-spoofing - the MAC and IP addresses of the guest no-arp-mac-spoofing - the MAC address of the guest no-arp-ip-spoofing - the IP address of the guest allow-dhcp Allows a guest virtual machine to request an IP address via DHCP (from any DHCP server). allow-dhcp-server Allows a guest virtual machine to request an IP address from a specified DHCP server. The dotted decimal IP address of the DHCP server must be provided in a reference to this filter. The name of the variable must be DHCPSERVER . allow-ipv4 Accepts all incoming and outgoing IPv4 traffic to a virtual machine. allow-incoming-ipv4 Accepts only incoming IPv4 traffic to a virtual machine. This filter is a part of the clean-traffic filter. no-ip-spoofing Prevents a guest virtual machine from sending IP packets with a source IP address different from the one inside the packet. This filter is a part of the clean-traffic filter. no-ip-multicast Prevents a guest virtual machine from sending IP multicast packets. no-mac-broadcast Prevents outgoing IPv4 traffic to the broadcast MAC address. This filter is a part of the clean-traffic filter. no-other-l2-traffic Prevents all layer 2 networking traffic except traffic specified by other filters used by the network. This filter is a part of the clean-traffic filter. no-other-rarp-traffic , qemu-announce-self , qemu-announce-self-rarp These filters allow QEMU's self-announce Reverse Address Resolution Protocol (RARP) packets, but prevent all other RARP traffic. All of them are also included in the clean-traffic filter. clean-traffic Prevents MAC, IP and ARP spoofing. This filter references several other filters as building blocks. These filters are only building blocks and require a combination with other filters to provide useful network traffic filtering. The most commonly used one in the above list is the clean-traffic filter. This filter itself can for example be combined with the no-ip-multicast filter to prevent virtual machines from sending IP multicast traffic on top of the prevention of packet spoofing. 17.14.11.5. Writing your own filters Since libvirt only provides a couple of example networking filters, you may consider writing your own. When planning on doing so there are a couple of things you may need to know regarding the network filtering subsystem and how it works internally. You also have to know and understand the protocols that you want to filter on very well, so that no traffic beyond what you want can pass and that the traffic you do want to allow in fact passes. The network filtering subsystem is currently only available on Linux host physical machines and only works for QEMU and KVM types of virtual machines. On Linux, it builds upon the support for ebtables, iptables and ip6tables and makes use of their features. Considering the list found in Section 17.14.10, "Supported Protocols" the following protocols can be implemented using ebtables: mac stp (spanning tree protocol) vlan (802.1Q) arp, rarp ipv4 ipv6 Any protocol that runs over IPv4 is supported using iptables; those over IPv6 are implemented using ip6tables. Using a Linux host physical machine, all traffic filtering rules created by libvirt's network filtering subsystem first pass through the filtering support implemented by ebtables and only afterwards through iptables or ip6tables filters.
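As a minimal sketch of the clean-traffic and no-ip-multicast combination mentioned earlier in this section (the filter name here is a hypothetical choice, not one shipped with libvirt), a custom filter can simply reference both building blocks:

<filter name='clean-traffic-no-multicast'>
  <filterref filter='clean-traffic'/>
  <filterref filter='no-ip-multicast'/>
</filter>

A guest interface that references clean-traffic-no-multicast then gets the spoofing protection of clean-traffic plus the multicast restriction.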
If a filter tree has rules with the protocols including: mac, stp, vlan arp, rarp, ipv4, or ipv6; the ebtable rules and values listed will automatically be used first. Multiple chains for the same protocol can be created. The name of the chain must have a prefix of one of the previously enumerated protocols. To create an additional chain for handling of ARP traffic, a chain with name arp-test, can for example be specified. As an example, it is possible to filter on UDP traffic by source and destination ports using the ip protocol filter and specifying attributes for the protocol, source and destination IP addresses and ports of UDP packets that are to be accepted. This allows early filtering of UDP traffic with ebtables. However, once an IP or IPv6 packet, such as a UDP packet, has passed the ebtables layer and there is at least one rule in a filter tree that instantiates iptables or ip6tables rules, a rule to let the UDP packet pass will also be necessary to be provided for those filtering layers. This can be achieved with a rule containing an appropriate udp or udp-ipv6 traffic filtering node. Example 17.11. Creating a custom filter Suppose a filter is needed to fulfill the following list of requirements: prevents a VM's interface from MAC, IP and ARP spoofing opens only TCP ports 22 and 80 of a VM's interface allows the VM to send ping traffic from an interface but not let the VM be pinged on the interface allows the VM to do DNS lookups (UDP towards port 53) The requirement to prevent spoofing is fulfilled by the existing clean-traffic network filter, thus the way to do this is to reference it from a custom filter. To enable traffic for TCP ports 22 and 80, two rules are added to enable this type of traffic. To allow the guest virtual machine to send ping traffic a rule is added for ICMP traffic. For simplicity reasons, general ICMP traffic will be allowed to be initiated from the guest virtual machine, and will not be specified to ICMP echo request and response messages. All other traffic will be prevented to reach or be initiated by the guest virtual machine. To do this a rule will be added that drops all other traffic. Assuming the guest virtual machine is called test and the interface to associate our filter with is called eth0 , a filter is created named test-eth0 . The result of these considerations is the following network filter XML: 17.14.11.6. Sample custom filter Although one of the rules in the above XML contains the IP address of the guest virtual machine as either a source or a destination address, the filtering of the traffic works correctly. The reason is that whereas the rule's evaluation occurs internally on a per-interface basis, the rules are additionally evaluated based on which (tap) interface has sent or will receive the packet, rather than what their source or destination IP address may be. Example 17.12. Sample XML for network interface descriptions An XML fragment for a possible network interface description inside the domain XML of the test guest virtual machine could then look like this: To more strictly control the ICMP traffic and enforce that only ICMP echo requests can be sent from the guest virtual machine and only ICMP echo responses be received by the guest virtual machine, the above ICMP rule can be replaced with the following two rules: Example 17.13. 
Second example custom filter This example demonstrates how to build a filter similar to the one in the example above, but extends the list of requirements with an ftp server located inside the guest virtual machine. The requirements for this filter are: prevents a guest virtual machine's interface from MAC, IP, and ARP spoofing opens only TCP ports 22 and 80 in a guest virtual machine's interface allows the guest virtual machine to send ping traffic from an interface but does not allow the guest virtual machine to be pinged on the interface allows the guest virtual machine to do DNS lookups (UDP towards port 53) enables the ftp server (in active mode) so it can run inside the guest virtual machine The additional requirement of allowing an FTP server to be run inside the guest virtual machine maps into the requirement of allowing port 21 to be reachable for FTP control traffic as well as enabling the guest virtual machine to establish an outgoing TCP connection originating from the guest virtual machine's TCP port 20 back to the FTP client (FTP active mode). There are several ways in which this filter can be written, and two possible solutions are included in this example. The first solution makes use of the state attribute of the TCP protocol that provides a hook into the connection tracking framework of the Linux host physical machine. For the guest virtual machine-initiated FTP data connection (FTP active mode) the RELATED state is used to enable detection that the guest virtual machine-initiated FTP data connection is a consequence of ( or 'has a relationship with' ) an existing FTP control connection, thereby allowing it to pass packets through the firewall. The RELATED state, however, is only valid for the very first packet of the outgoing TCP connection for the FTP data path. Afterwards, the state is ESTABLISHED, which then applies equally to the incoming and outgoing direction. All this is related to the FTP data traffic originating from TCP port 20 of the guest virtual machine. This then leads to the following solution: Before trying out a filter using the RELATED state, you have to make sure that the appropriate connection tracking module has been loaded into the host physical machine's kernel. Depending on the version of the kernel, you must run either one of the following two commands before the FTP connection with the guest virtual machine is established: modprobe nf_conntrack_ftp - where available OR modprobe ip_conntrack_ftp - if the above is not available If protocols other than FTP are used in conjunction with the RELATED state, their corresponding module must be loaded. Modules are available for the protocols: ftp, tftp, irc, sip, sctp, and amanda. The second solution makes use of the state flags of connections more than the previous solution did. This solution takes advantage of the fact that the NEW state of a connection is valid when the very first packet of a traffic flow is detected. Subsequently, if the very first packet of a flow is accepted, the flow becomes a connection and thus enters into the ESTABLISHED state. Therefore, a general rule can be written for allowing packets of ESTABLISHED connections to reach the guest virtual machine or be sent by the guest virtual machine. This is done by writing specific rules for the very first packets, identified by the NEW state, that dictate the ports on which the data is acceptable. All packets meant for ports that are not explicitly accepted are dropped, thus not reaching an ESTABLISHED state. Any subsequent packets sent from that port are dropped as well.
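One way to confirm that the connection tracking helper mentioned above is actually loaded before testing the FTP filter is sketched below; this is only a hedged illustration, since the module name differs between kernel versions:

# modprobe nf_conntrack_ftp
# lsmod | grep conntrack_ftp

If the lsmod output is empty, try the older ip_conntrack_ftp module name instead.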
17.14.12. Limitations The following is a list of the currently known limitations of the network filtering subsystem. VM migration is only supported if the whole filter tree that is referenced by a guest virtual machine's top level filter is also available on the target host physical machine. The network filter clean-traffic , for example, should be available on all libvirt installations and thus enable migration of guest virtual machines that reference this filter. To ensure that version compatibility is not a problem, make sure you are using the most current version of libvirt by updating the package regularly. Migration must occur between libvirt installations of version 0.8.1 or later in order not to lose the network traffic filters associated with an interface. VLAN (802.1Q) packets, if sent by a guest virtual machine, cannot be filtered with rules for protocol IDs arp, rarp, ipv4 and ipv6. They can only be filtered with protocol IDs mac and vlan. Therefore, the example filter clean-traffic Example 17.1, "An example of network filtering" will not work as expected.
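Before migrating a guest whose interface references a filter, one practical, hedged check is to confirm that the whole filter tree resolves on the destination host physical machine as well, for example:

# virsh nwfilter-list
# virsh nwfilter-dumpxml clean-traffic

Run on the destination host, an error from nwfilter-dumpxml indicates that a referenced filter is missing there and that migration of the guest would fail.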
|
[
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'/> </interface> </devices>",
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> </filterref> </interface> </devices>",
"<filter name='no-arp-spoofing' chain='arp' priority='-500'> <uuid>f88f1932-debf-4aa1-9fbe-f10d3aa4bc95</uuid> <rule action='drop' direction='out' priority='300'> <mac match='no' srcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='350'> <arp match='no' arpsrcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='400'> <arp match='no' arpsrcipaddr='USDIP'/> </rule> <rule action='drop' direction='in' priority='450'> <arp opcode='Reply'/> <arp match='no' arpdstmacaddr='USDMAC'/> </rule> <rule action='drop' direction='in' priority='500'> <arp match='no' arpdstipaddr='USDIP'/> </rule> <rule action='accept' direction='inout' priority='600'> <arp opcode='Request'/> </rule> <rule action='accept' direction='inout' priority='650'> <arp opcode='Reply'/> </rule> <rule action='drop' direction='inout' priority='1000'/> </filter>",
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> <parameter name='IP' value='10.0.0.2'/> <parameter name='IP' value='10.0.0.3'/> </filterref> </interface> </devices>",
"<rule action='accept' direction='in' priority='500'> <tcp srpipaddr='USDIP'/> </rule>",
"<rule action='accept' direction='in' priority='500'> <udp dstportstart='USDDSTPORTS[1]'/> </rule>",
"<rule action='accept' direction='in' priority='500'> <ip srcipaddr='USDSRCIPADDRESSES[@1]' dstportstart='USDDSTPORTS[@2]'/> </rule>",
"SRCIPADDRESSES = [ 10.0.0.1, 11.1.2.3 ] DSTPORTS = [ 80, 8080 ]",
"<interface type='bridge'> <source bridge='virbr0'/> <filterref filter='clean-traffic'> <parameter name='CTRL_IP_LEARNING' value='dhcp'/> </filterref> </interface>",
"<filter name='clean-traffic'> <uuid>6ef53069-ba34-94a0-d33d-17751b9b8cb1</uuid> <filterref filter='no-mac-spoofing'/> <filterref filter='no-ip-spoofing'/> <filterref filter='allow-incoming-ipv4'/> <filterref filter='no-arp-spoofing'/> <filterref filter='no-other-l2-traffic'/> <filterref filter='qemu-announce-self'/> </filter>",
"<filter name='no-ip-spoofing' chain='ipv4'> <uuid>fce8ae33-e69e-83bf-262e-30786c1f8072</uuid> <rule action='drop' direction='out' priority='500'> <ip match='no' srcipaddr='USDIP'/> </rule> </filter>",
"[...] <rule action='drop' direction='in'> <protocol match='no' attribute1='value1' attribute2='value2'/> <protocol attribute3='value3'/> </rule> [...]",
"[...] <mac match='no' srcmacaddr='USDMAC'/> [...]",
"[...] <rule direction='in' action='accept' statematch='false'> <cp dstportstart='12345'/> </rule> [...]",
"[...] <rule action='drop' direction='in' priority='400'> <tcp connlimit-above='1'/> </rule> <rule action='accept' direction='in' priority='500'> <tcp dstportstart='22'/> </rule> <rule action='drop' direction='out' priority='400'> <icmp connlimit-above='1'/> </rule> <rule action='accept' direction='out' priority='500'> <icmp/> </rule> <rule action='accept' direction='out' priority='500'> <udp dstportstart='53'/> </rule> <rule action='drop' direction='inout' priority='1000'> <all/> </rule> [...]",
"<filter name='test-eth0'> <!- - This rule references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP ports 22 (ssh) and 80 (http) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule>> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>",
"[...] <interface type='bridge'> <source bridge='mybridge'/> <filterref filter='test-eth0'/> </interface> [...]",
"<!- - enable outgoing ICMP echo requests- -> <rule action='accept' direction='out'> <icmp type='8'/> </rule>",
"<!- - enable incoming ICMP echo replies- -> <rule action='accept' direction='in'> <icmp type='0'/> </rule>",
"<filter name='test-eth0'> <!- - This filter (eth0) references the clean traffic filter to prevent MAC, IP, and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP port 21 (FTP-control) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='21'/> </rule> <!- - This rule enables TCP port 20 for guest virtual machine-initiated FTP data connection related to an existing FTP control connection - -> <rule action='accept' direction='out'> <tcp srcportstart='20' state='RELATED,ESTABLISHED'/> </rule> <!- - This rule accepts all packets from a client on the FTP data connection - -> <rule action='accept' direction='in'> <tcp dstportstart='20' state='ESTABLISHED'/> </rule> <!- - This rule enables TCP port 22 (SSH) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <!- -This rule enables TCP port 80 (HTTP) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>",
"<filter name='test-eth0'> <!- - This filter references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing and IP address parameter, libvirt will detect the IP address the VM is using. - -> <filterref filter='clean-traffic'/> <!- - This rule allows the packets of all previously accepted connections to reach the guest virtual machine - -> <rule action='accept' direction='in'> <all state='ESTABLISHED'/> </rule> <!- - This rule allows the packets of all previously accepted and related connections be sent from the guest virtual machine - -> <rule action='accept' direction='out'> <all state='ESTABLISHED,RELATED'/> </rule> <!- - This rule enables traffic towards port 21 (FTP) and port 22 (SSH)- -> <rule action='accept' direction='in'> <tcp dstportstart='21' dstportend='22' state='NEW'/> </rule> <!- - This rule enables traffic towards port 80 (HTTP) - -> <rule action='accept' direction='in'> <tcp dstportstart='80' state='NEW'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp state='NEW'/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53' state='NEW'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-applying_network_filtering
|
2.2. Creating and Maintaining Databases
|
2.2. Creating and Maintaining Databases After creating suffixes to organizing the directory data, create databases to contain data of that directory. Note If you used the dsconf utility or the web console to create the suffix, Directory Server created the database automatically. 2.2.1. Creating Databases The directory tree can be distributed over multiple Directory Server databases. There are two ways to distribute data across multiple databases: One database per suffix. The data for each suffix is contained in a separate database. Three databases are added to store the data contained in separate suffixes: This division of the tree units corresponds to three databases, for example: In this example, DB1 contains the data for ou=people and the data for dc=example,dc=com , so that clients can conduct searches based at dc=example,dc=com . However, DB2 only contains the data for ou=groups , and DB3 only contains the data for ou=contractors : Multiple databases for one suffix. Suppose the number of entries in the ou=people branch of the directory tree is so large that two databases are needed to store them. In this case, the data contained by ou=people could be distributed across two databases: DB1 contains people with names from A-K , and DB2 contains people with names from L-Z . DB3 contains the ou=groups data, and DB4 contains the ou=contractors data. A custom plug-in distributes data from a single suffix across multiple databases. Contact Red Hat Consulting for information on how to create distribution logic for Directory Server. 2.2.1.1. Creating a New Database for a Single Suffix Using the Command Line Use the ldapmodify command-line utility to add a new database to the directory configuration file. The database configuration information is stored in the cn=ldbm database,cn=plugins,cn=config entry. To add a new database: Run ldapmodify and create the entry for the new database. The added entry corresponds to a database named UserData that contains the data for the root or sub-suffix ou=people,dc=example,dc=com . Create a root or a sub-suffix, as described in Section 2.1.1.1.1, "Creating a Root Suffix Using the Command Line" and Section 2.1.1.2.1, "Creating a Sub-suffix Using the Command Line" . The database name, given in the DN attribute, must correspond with the value in the nsslapd-backend attribute of the suffix entry. 2.2.1.2. Adding Multiple Databases for a Single Suffix A single suffix can be distributed across multiple databases. However, to distribute the suffix, a custom distribution function has to be created to extend the directory. For more information on creating a custom distribution function, contact Red Hat Consulting. Note Once entries have been distributed, they cannot be redistributed. The following restrictions apply: The distribution function cannot be changed once entry distribution has been deployed. The LDAP modrdn operation cannot be used to rename entries if that would cause them to be distributed into a different database. Distributed local databases cannot be replicated. The ldapmodify operation cannot be used to change entries if that would cause them to be distributed into a different database. Violating these restrictions prevents Directory Server from correctly locating and returning entries. After creating a custom distribution logic plug-in, add it to the directory. The distribution logic is a function declared in a suffix. This function is called for every operation reaching this suffix, including subtree search operations that start above the suffix. 
A distribution function can be inserted into a suffix using either the web console or the command line interface. To add a custom distribution function to a suffix: Run ldapmodify . Add the following attributes to the suffix entry itself, supplying the information about the custom distribution logic: The nsslapd-backend attribute specifies all databases associated with this suffix. The nsslapd-distribution-plugin attribute specifies the name of the library that the plug-in uses. The nsslapd-distribution-funct attribute provides the name of the distribution function itself. 2.2.2. Maintaining Directory Databases 2.2.2.1. Setting a Database in Read-Only Mode When a database is in read-only mode, you cannot create, modify, or delete any entries. One of the situations when read-only mode is useful is for manually initializing a consumer or before backing up or exporting data from Directory Server. Read-only mode ensures a faithful image of the state of these databases at a given time. The command-line utilities and the web console do not automatically put the directory in read-only mode before export or backup operations because this would make your directory unavailable for updates. However, with multi-supplier replication, this might not be a problem. 2.2.2.1.1. Setting a Database in Read-only Mode Using the Command Line To set a database in read-only mode, use the dsconf backend suffix set command. For example, to set the database of the o=test suffix in read-only mode: Display the suffixes and their corresponding back end: This command displays the name of the back end database next to each suffix. You require the suffix's database name in the next step. Set the database in read-only mode: 2.2.2.1.2. Setting a Database in Read-only Mode Using the Web Console To set a database in read-only mode: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Select Database Read-Only Mode . Click Save Configuration . 2.2.2.2. Placing the Entire Directory Server in Read-Only Mode If Directory Server maintains more than one database and all databases need to be placed in read-only mode, this can be done in a single operation. Warning This operation also makes Directory Server configuration read-only; therefore, you cannot update the server configuration, enable or disable plug-ins, or even restart Directory Server while it is in read-only mode. Once read-only mode is enabled, it cannot be undone unless you manually modify the configuration files. Note If Directory Server contains replicas, do not use read-only mode because it will disable replication. 2.2.2.2.1. Placing the Entire Directory Server in Read-Only Mode Using the Command Line To enable the read-only mode for Directory Server: Set the nsslapd-readonly parameter to on : Restart the instance: 2.2.2.2.2. Placing the Entire Directory Server in Read-Only Mode Using the Web Console To enable the read-only mode for Directory Server: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Server Settings menu, and select the Server Settings entry. On the Advanced Settings tab, select Server Read-Only . Click Save . 2.2.2.3. Deleting a Database If a suffix is no longer required, you can delete the database that stores the suffix. 2.2.2.3.1.
Deleting a Database Using the Command Line To delete a database, use the dsconf backend delete command. For example, to delete the database of the o=test suffix: Display the suffixes and their corresponding back end: You require the name of the back end database, which is displayed next to the suffix, in the next step. Delete the database: 2.2.2.3.2. Deleting a Database Using the Web Console To delete a database using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix to delete, click Suffix Tasks , and select Delete Suffix . Click Yes to confirm. 2.2.2.4. Changing the Transaction Log Directory The transaction log enables Directory Server to recover the database after an instance shut down unexpectedly. In certain situations, administrators want to change the path to the transaction logs. For example, to store them on a different physical disk than the Directory Server database. Note To achieve higher performance, mount a faster disk to the directory that contains the transaction logs, instead of changing the location. For details, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . To change the location of the transaction log directory: Stop the Directory Server instance: Create a new location for the transaction logs. For example: Set permissions to enable only Directory Server to access the directory: Remove all __db.* files from the transaction log directory. For example: Move all log.* files from the previous to the new transaction log directory. For example: If SELinux is running in enforcing mode, set the dirsrv_var_lib_t context on the directory: Edit the /etc/dirsrv/slapd- instance_name /dse.ldif file, and update the nsslapd-db-logdirectory parameter under the cn=config,cn=ldbm database,cn=plugins,cn=config entry. For example: Start the instance:
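After the instance starts, one hedged way to verify that the new location is in use is to check the parameter and the directory contents; the paths below follow the example values used in this procedure:

# grep nsslapd-db-logdirectory /etc/dirsrv/slapd-instance_name/dse.ldif
# ls /srv/dirsrv/instance_name/db/

The grep output should show the new directory, and the listing should contain the log.* files that were moved.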
|
[
"ldapmodify -a -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=UserData,cn=ldbm database,cn=plugins,cn=config changetype: add objectclass: extensibleObject objectclass: nsBackendInstance nsslapd-suffix: ou=people,dc=example,dc=com",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x",
"dn: suffix changetype: modify add: nsslapd-backend nsslapd-backend: Database1 - add: nsslapd-backend nsslapd-backend: Database2 - add: nsslapd-backend nsslapd-backend: Database3 - add: nsslapd-distribution-plugin nsslapd-distribution-plugin: /full/name/of/a/shared/library - add: nsslapd-distribution-funct nsslapd-distribution-funct: distribution-function-name",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com (userroot) o=test ( test_database )",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --enable-readonly \" test_database \"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-readonly=on",
"dsctl instance_name restart",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com (userroot) o=test ( test_database )",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend delete \" test_database \"",
"dsctl instance_name stop",
"mkdir -p /srv/dirsrv/ instance_name /db/",
"chown dirsrv:dirsrv /srv/dirsrv/ instance_name /db/ chmod 770 /srv/dirsrv/ instance_name /db/",
"rm /var/lib/dirsrv/slapd- instance_name /db/__db.*",
"mv /var/lib/dirsrv/slapd- instance_name /db/log.* /srv/dirsrv/ instance_name /db/",
"semanage fcontext -a -t dirsrv_var_lib_t /srv/dirsrv/ instance_name /db/ restorecon -Rv /srv/dirsrv/ instance_name /db/",
"dn: cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-logdirectory: /srv/dirsrv/ instance_name /db/",
"dsctl instance_name start"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Configuring_Directory_Databases-Creating_and_Maintaining_Databases
|
Chapter 160. KafkaNodePoolSpec schema reference
|
Chapter 160. KafkaNodePoolSpec schema reference Used in: KafkaNodePool Property Property type Description replicas integer The number of pods in the pool. storage EphemeralStorage , PersistentClaimStorage , JbodStorage Storage configuration (disk). Cannot be updated. roles string (one or more of [controller, broker]) array The roles that the nodes in this pool will have when KRaft mode is enabled. Supported values are 'broker' and 'controller'. This field is required. When KRaft mode is disabled, the only allowed value is broker . resources ResourceRequirements CPU and memory resources to reserve. jvmOptions JvmOptions JVM Options for pods. template KafkaNodePoolTemplate Template for pool resources. The template allows users to specify how the resources belonging to this pool are generated.
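For orientation, the following is a hedged sketch of a KafkaNodePool resource that sets these properties; the pool name, cluster label, and sizes are illustrative assumptions, not values prescribed by this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
    limits:
      cpu: "2"
      memory: 4Gi

The storage block here corresponds to the JbodStorage and PersistentClaimStorage types listed in the table, and roles uses the broker value that is required when KRaft mode is disabled.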
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaNodePoolSpec-reference
|
Chapter 6. Kernel
|
Chapter 6. Kernel Support for Ceph Block Devices The libceph.ko and rbd.ko modules have been added to the Red Hat Enterprise Linux 7.1 kernel. These RBD kernel modules allow a Linux host to see a Ceph block device as a regular disk device entry which can be mounted to a directory and formatted with a standard file system, such as XFS or ext4 . Note that the CephFS module, ceph.ko , is currently not supported in Red Hat Enterprise Linux 7.1. Concurrent Flash MCL Updates Microcode level upgrades (MCL) are enabled in Red Hat Enterprise Linux 7.1 on the IBM System z architecture. These upgrades can be applied without impacting I/O operations to the flash storage media and notify users of the changed flash hardware service level. Dynamic kernel Patching Red Hat Enterprise Linux 7.1 introduces kpatch , a dynamic "kernel patching utility", as a Technology Preview. The kpatch utility allows users to manage a collection of binary kernel patches which can be used to dynamically patch the kernel without rebooting. Note that kpatch is supported to run only on AMD64 and Intel 64 architectures. Crashkernel with More than 1 CPU Red Hat Enterprise Linux 7.1 enables booting crashkernel with more than one CPU. This function is supported as a Technology Preview. dm-era Target Red Hat Enterprise Linux 7.1 introduces the dm-era device-mapper target as a Technology Preview. dm-era keeps track of which blocks were written within a user-defined period of time called an "era". Each era target instance maintains the current era as a monotonically increasing 32-bit counter. This target enables backup software to track which blocks have changed since the last backup. It also enables partial invalidation of the contents of a cache to restore cache coherency after rolling back to a vendor snapshot. The dm-era target is primarily expected to be paired with the dm-cache target. Cisco VIC kernel Driver The Cisco VIC Infiniband kernel driver has been added to Red Hat Enterprise Linux 7.1 as a Technology Preview. This driver allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures. Enhanced Entropy Management in hwrng The paravirtualized hardware RNG (hwrng) support for Linux guests via virtio-rng has been enhanced in Red Hat Enterprise Linux 7.1. Previously, the rngd daemon needed to be started inside the guest and directed to the guest kernel's entropy pool. Since Red Hat Enterprise Linux 7.1, the manual step has been removed. A new khwrngd thread fetches entropy from the virtio-rng device if the guest entropy falls below a specific level. Making this process transparent helps all Red Hat Enterprise Linux guests in utilizing the improved security benefits of having the paravirtualized hardware RNG provided by KVM hosts. Scheduler Load-Balancing Performance Improvement Previously, the scheduler load-balancing code balanced for all idle CPUs. In Red Hat Enterprise Linux 7.1, idle load balancing on behalf of an idle CPU is done only when the CPU is due for load balancing. This new behavior reduces the load-balancing rate on non-idle CPUs and therefore the amount of unnecessary work done by the scheduler, which improves its performance. Improved newidle Balance in Scheduler The behavior of the scheduler has been modified to stop searching for tasks in the newidle balance code if there are runnable tasks, which leads to better performance. 
HugeTLB Supports Per-Node 1GB Huge Page Allocation Red Hat Enterprise Linux 7.1 has added support for gigantic page allocation at runtime, which allows the user of 1GB hugetlbfs to specify which Non-Uniform Memory Access (NUMA) Node the 1GB should be allocated on during runtime. New MCS-based Locking Mechanism Red Hat Enterprise Linux 7.1 introduces a new locking mechanism, MCS locks. This new locking mechanism significantly reduces spinlock overhead in large systems, which makes spinlocks generally more efficient in Red Hat Enterprise Linux 7.1. Process Stack Size Increased from 8KB to 16KB Since Red Hat Enterprise Linux 7.1, the kernel process stack size has been increased from 8KB to 16KB to help large processes that use stack space. uprobe and uretprobe Features Enabled in perf and systemtap In Red Hat Enterprise Linux 7.1, the uprobe and uretprobe features work correctly with the perf command and the systemtap script. End-To-End Data Consistency Checking End-To-End data consistency checking on IBM System z is fully supported in Red Hat Enterprise Linux 7.1. This enhances data integrity and more effectively prevents data corruption as well as data loss. DRBG on 32-Bit Systems In Red Hat Enterprise Linux 7.1, the deterministic random bit generator (DRBG) has been updated to work on 32-bit systems. NFSoRDMA Available As a Technology Preview, the NFSoRDMA service has been enabled for Red Hat Enterprise Linux 7.1. This makes the svcrdma module available for users who intend to use Remote Direct Memory Access (RDMA) transport with the Red Hat Enterprise Linux 7 NFS server. Support for Large Crashkernel Sizes The Kdump kernel crash dumping mechanism on systems with large memory, that is up to the Red Hat Enterprise Linux 7.1 maximum memory supported limit of 6TB, has become fully supported in Red Hat Enterprise Linux 7.1. Kdump Supported on Secure Boot Machines With Red Hat Enterprise Linux 7.1, the Kdump crash dumping mechanism is supported on machines with enabled Secure Boot. Firmware-assisted Crash Dumping Red Hat Enterprise Linux 7.1 introduces support for firmware-assisted dump (fadump), which provides an alternative crash dumping tool to kdump. The firmware-assisted feature provides a mechanism to release the reserved dump memory for general use once the crash dump is saved to the disk. This avoids the need to reboot the system after performing the dump, and thus reduces the system downtime. In addition, fadump uses of the kdump infrastructure already present in the user space, and works seamlessly with the existing kdump init scripts. Runtime Instrumentation for IBM System z As a Technology Preview, support for the Runtime Instrumentation feature has been added for Red Hat Enterprise Linux 7.1 on IBM System z. Runtime Instrumentation enables advanced analysis and execution for a number of user-space applications available with the IBM zEnterprise EC12 system. Cisco usNIC Driver Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. As a Technology Preview, Red Hat Enterprise Linux 7.1 includes the libusnic_verbs driver, which makes it possible to use usNIC devices via standard InfiniBand RDMA programming based on the Verbs API. Intel Ethernet Server Adapter X710/XL710 Driver Update The i40e and i40evf kernel drivers have been updated to their latest upstream versions. 
These updated drivers are included as a Technology Preview in Red Hat Enterprise Linux 7.1.
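As an illustration of the per-node gigantic page support described above, the number of 1GB pages reserved on a given NUMA node can be adjusted at runtime through sysfs and then used through a hugetlbfs mount. The node number, page count, and mount point are illustrative, and the allocation succeeds only if enough contiguous memory is free on that node:
echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages    # confirm how many pages were actually allocated
mkdir -p /mnt/hugepages-1G
mount -t hugetlbfs -o pagesize=1G none /mnt/hugepages-1G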
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-kernel
|
Chapter 2. Using tags to manage cost data
|
Chapter 2. Using tags to manage cost data Learn how tags work in cost management and how you can use them to best organize and view your resources to manage your costs. 2.1. Enabling and creating a tag mapping Tag mapping is the process of combining multiple tags across your cloud integrations. Tag mapping enables you to group and filter similar tags with one tag key. To map a tag, you must first enable it. Cost management has a limit of 200 tags that you can enable. Complete the following steps: In cost management, click Settings . Click the header tab, Tags and labels . Click the drop-down menu, Enable tags and labels . Select the tags that you want to enable. Clear a tag to disable it. Next, click the Map tags and labels drop-down menu and click Create a tag mapping . In the wizard that opens, select the tags that you want to make child tags. Then click Next . Select one tag that you want to be the parent tag. This action will map the parent tag to the child tags that you selected in the previous step. Click Next . Review your selections and click Create tag mapping . 2.1.1. Troubleshooting duplicate keys For every resource, each tag key must be unique and have only one value. However, when you map tags, you can unintentionally create scenarios that violate this rule and create multiple values. Ordinarily, having more than one value would duplicate your costs. However, to avoid duplication, cost management prioritizes one key's value. To understand how cost management prioritizes values and plan accordingly, see Section 1.3, "Understanding value precedence in tags" . 2.1.1.1. Troubleshooting example Consider the following example where you are running an EC2 instance on AWS. You tagged this instance with the following key > value pairs: app > cost-management App > Insights In cost management, you mapped app to App . Therefore, the same EC2 instance has the following key > value pairs: App > cost-management App > Insights In this situation, cost management prioritizes the pre-existing value of the key App > Insights . Cost management also removes the association of the key app and its value cost-management from the AWS resource to prevent duplicate costs. The cost previously reported under app=cost-management is added to the App=Insights cost. Because App is set as the parent key, cost management prioritizes its value over the value of app in the tag mapping. Therefore, you should consider your tag mapping strategy before you set parent tags. To troubleshoot issues with duplicate keys, ensure that your tag keys are unique and have only one value. Alternatively, to learn how to prioritize your keys, see Section 1.3, "Understanding value precedence in tags" .
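For reference, the tagging in the example above could have been applied to the instance with the AWS CLI as follows; the instance ID is illustrative:
aws ec2 create-tags --resources i-0abcd1234example --tags Key=app,Value=cost-management Key=App,Value=Insights
aws ec2 describe-tags --filters "Name=resource-id,Values=i-0abcd1234example"    # confirm both keys are present on the resource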
| null |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/managing_cost_data_using_tagging/assembly-managing-cost-data-tagging
|
Chapter 7. Forwarding telemetry data
|
Chapter 7. Forwarding telemetry data You can use the OpenTelemetry Collector to forward your telemetry data. 7.1. Forwarding traces to a TempoStack instance To configure forwarding traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources . Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespaces resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR). Example OpenTelemetryCollector apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-simplest-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created. 2 The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol. Tip You can deploy telemetrygen as a test: apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4 Additional resources OpenTelemetry Collector documentation Deployment examples on GitHub 7.2. Forwarding logs to a LokiStack instance You can deploy the OpenTelemetry Collector to forward logs to a LokiStack instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. 
The Loki Operator is installed. A supported LokiStack instance is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging Create a cluster role that grants the Collector's service account the permissions to push logs to the LokiStack application tenant. Example ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: ["loki.grafana.com"] resourceNames: ["logs"] resources: ["application"] verbs: ["create"] - apiGroups: [""] resources: ["pods", "namespaces", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets"] verbs: ["get", "list", "watch"] - apiGroups: ["extensions"] resources: ["replicasets"] verbs: ["get", "list", "watch"] Bind the cluster role to the service account. Example ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging Create an OpenTelemetryCollector custom resource (CR) object. Example OpenTelemetryCollector CR object apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes["level"], ConvertCase(severity_text, "lower")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug] 1 Provides the following resource attributes to be used by the web console: kubernetes.namespace_name , kubernetes.pod_name , kubernetes.container_name , and log_type . 2 Enables the BearerTokenAuth Extension that is required by the OTLP HTTP Exporter. 3 Enables the OTLP HTTP Exporter to export logs from the Collector. Tip You can deploy telemetrygen as a test: apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name="telemetrygen" restartPolicy: Never backoffLimit: 4 Additional resources Installing LokiStack log storage
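A quick way to sanity-check either pipeline, assuming the OpenTelemetryCollector instance is named otel as in the examples above (the Operator derives the workload name from it, so the generated names may differ in your environment), is to watch the generated Deployment and scan its logs for exporter errors:
oc get pods -l app.kubernetes.io/component=opentelemetry-collector -n openshift-logging
oc logs deployment/otel-collector -n openshift-logging | grep -i error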
|
[
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/otel-forwarding-telemetry-data
|
Troubleshooting
|
Troubleshooting Red Hat Advanced Cluster Management for Kubernetes 2.12 Troubleshooting
|
[
"adm must-gather --image=registry.redhat.io/rhacm2/acm-must-gather-rhel9:v2.12 --dest-dir=<directory>",
"<your-directory>/cluster-scoped-resources/gather-managed.log>",
"REGISTRY=<internal.repo.address:port> IMAGE1=USDREGISTRY/rhacm2/acm-must-gather-rhel9:v<2.x> adm must-gather --image=USDIMAGE1 --dest-dir=<directory>",
"adm must-gather --image=quay.io/stolostron/backplane-must-gather:SNAPSHOTNAME /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME",
"adm must-gather --image=quay.io/stolostron/backplane-must-gather:SNAPSHOTNAME /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=NAME ; tar -cvzf NAME.tgz NAME",
"REGISTRY=registry.example.com:5000 IMAGE=USDREGISTRY/multicluster-engine/must-gather-rhel8@sha256:ff9f37eb400dc1f7d07a9b6f2da9064992934b69847d17f59e385783c071b9d8 adm must-gather --image=USDIMAGE /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=./data",
"reason: Unschedulable message: '0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.'",
"Error from server: request to convert CR from an invalid group/version: cluster.open-cluster-management.io/v1beta1",
"annotate mce multiclusterengine pause=true",
"patch deployment cluster-manager -n multicluster-engine -p \\ '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"registration-operator\",\"image\":\"registry.redhat.io/multicluster-engine/registration-operator-rhel8@sha256:35999c3a1022d908b6fe30aa9b85878e666392dbbd685e9f3edcb83e3336d19f\"}]}}}}' export ORIGIN_REGISTRATION_IMAGE=USD(oc get clustermanager cluster-manager -o jsonpath='{.spec.registrationImagePullSpec}')",
"patch clustermanager cluster-manager --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/registrationImagePullSpec\", \"value\": \"registry.redhat.io/multicluster-engine/registration-rhel8@sha256:a3c22aa4326859d75986bf24322068f0aff2103cccc06e1001faaf79b9390515\"}]'",
"annotate crds managedclustersets.cluster.open-cluster-management.io operator.open-cluster-management.io/version- annotate crds managedclustersetbindings.cluster.open-cluster-management.io operator.open-cluster-management.io/version-",
"-n multicluster-engine delete pods -l app=cluster-manager wait crds managedclustersets.cluster.open-cluster-management.io --for=jsonpath=\"{.metadata.annotations['operator\\.open-cluster-management\\.io/version']}\"=\"2.3.3\" --timeout=120s wait crds managedclustersetbindings.cluster.open-cluster-management.io --for=jsonpath=\"{.metadata.annotations['operator\\.open-cluster-management\\.io/version']}\"=\"2.3.3\" --timeout=120s",
"patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' -p='[{\"op\":\"replace\", \"path\":\"/spec/resource/version\", \"value\":\"v1beta1\"}]' patch StorageVersionMigration managedclustersets.cluster.open-cluster-management.io --type='json' --subresource status -p='[{\"op\":\"remove\", \"path\":\"/status/conditions\"}]' patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' -p='[{\"op\":\"replace\", \"path\":\"/spec/resource/version\", \"value\":\"v1beta1\"}]' patch StorageVersionMigration managedclustersetbindings.cluster.open-cluster-management.io --type='json' --subresource status -p='[{\"op\":\"remove\", \"path\":\"/status/conditions\"}]'",
"wait storageversionmigration managedclustersets.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s wait storageversionmigration managedclustersetbindings.cluster.open-cluster-management.io --for=condition=Succeeded --timeout=120s",
"annotate mce multiclusterengine pause- patch clustermanager cluster-manager --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/registrationImagePullSpec\", \"value\": \"'USDORIGIN_REGISTRATION_IMAGE'\"}]'",
"get managedclusterset get managedclustersetbinding -A",
"-n multicluster-engine get pods -l app=managedcluster-import-controller-v2",
"-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1",
"-n <managed_cluster_name> get secrets <managed_cluster_name>-import",
"-n multicluster-engine logs -l app=managedcluster-import-controller-v2 --tail=-1 | grep importconfig-controller",
"get managedcluster <managed_cluster_name> -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}' | grep ManagedClusterImportSucceeded",
"get pod -n open-cluster-management-agent | grep klusterlet-registration-agent",
"logs <registration_agent_pod> -n open-cluster-management-agent",
"get infrastructure cluster -o yaml | grep apiServerURL",
"error log: Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition Error from server (AlreadyExists): error when creating \"STDIN\": customresourcedefinitions.apiextensions.k8s.io \"klusterlets.operator.open-cluster-management.io\" already exists The cluster cannot be imported because its Klusterlet CRD already exists. Either the cluster was already imported, or it was not detached completely during a previous detach process. Detach the existing cluster before trying the import again.\"",
"get all -n open-cluster-management-agent get all -n open-cluster-management-agent-addon",
"get klusterlet | grep klusterlet | awk '{print USD1}' | xargs oc patch klusterlet --type=merge -p '{\"metadata\":{\"finalizers\": []}}'",
"delete namespaces open-cluster-management-agent open-cluster-management-agent-addon --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc delete crds --wait=false get crds | grep open-cluster-management.io | awk '{print USD1}' | xargs oc patch crds --type=merge -p '{\"metadata\":{\"finalizers\": []}}'",
"time=\"2020-08-07T15:27:55Z\" level=error msg=\"Error: error setting up new vSphere SOAP client: Post https://147.1.1.1/sdk: x509: cannot validate certificate for xx.xx.xx.xx because it doesn't contain any IP SANs\" time=\"2020-08-07T15:27:55Z\" level=error",
"Error: error setting up new vSphere SOAP client: Post https://vspherehost.com/sdk: x509: certificate signed by unknown authority\"",
"x509: certificate has expired or is not yet valid",
"time=\"2020-08-07T19:41:58Z\" level=debug msg=\"vsphere_tag_category.category: Creating...\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\"Error: could not create category: POST https://vspherehost.com/rest/com/vmware/cis/tagging/category: 403 Forbidden\" time=\"2020-08-07T19:41:58Z\" level=error time=\"2020-08-07T19:41:58Z\" level=error msg=\" on ../tmp/openshift-install-436877649/main.tf line 54, in resource \\\"vsphere_tag_category\\\" \\\"category\\\":\" time=\"2020-08-07T19:41:58Z\" level=error msg=\" 54: resource \\\"vsphere_tag_category\\\" \\\"category\\\" {\"",
"failed to fetch Master Machines: failed to load asset \\\\\\\"Install Config\\\\\\\": invalid \\\\\\\"install-config.yaml\\\\\\\" file: platform.vsphere.dnsVIP: Invalid value: \\\\\\\"\\\\\\\": \\\\\\\"\\\\\\\" is not a valid IP",
"time=\"2020-08-11T14:31:38-04:00\" level=debug msg=\"vsphereprivate_import_ova.import: Creating...\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error msg=\"Error: rpc error: code = Unavailable desc = transport is closing\" time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=error time=\"2020-08-11T14:31:39-04:00\" level=fatal msg=\"failed to fetch Cluster: failed to generate asset \\\"Cluster\\\": failed to create cluster: failed to apply Terraform: failed to complete the change\"",
"ERROR ERROR Error: error reconfiguring virtual machine: error processing disk changes post-clone: disk.0: ServerFaultCode: NoPermission: RESOURCE (vm-71:2000), ACTION (queryAssociatedProfile): RESOURCE (vm-71), ACTION (PolicyIDByVirtualDisk)",
"clouds: openstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt\"",
"spec: baseDomain: dev09.red-chesterfield.com clusterName: txue-osspoke platform: openstack: cloud: openstack credentialsSecretRef: name: txue-osspoke-openstack-creds certificatesSecretRef: name: txue-osspoke-openstack-certificatebundle",
"create secret generic txue-osspoke-openstack-certificatebundle --from-file=ca.crt=ca.crt.pem -n USDCLUSTERNAME",
"E0917 03:04:05.874759 1 manifestwork_controller.go:179] Reconcile work test-1-klusterlet-addon-workmgr fails with err: Failed to update work status with err Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:05.874887 1 base_controller.go:231] \"ManifestWorkAgent\" controller failed to sync \"test-1-klusterlet-addon-workmgr\", err: Failed to update work status with err Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks/test-1-klusterlet-addon-workmgr\": x509: certificate signed by unknown authority E0917 03:04:37.245859 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManifestWork: failed to list *v1.ManifestWork: Get \"api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/namespaces/test-1/manifestworks?resourceVersion=607424\": x509: certificate signed by unknown authority",
"I0917 02:27:41.525026 1 event.go:282] Event(v1.ObjectReference{Kind:\"Namespace\", Namespace:\"open-cluster-management-agent\", Name:\"open-cluster-management-agent\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'ManagedClusterAvailableConditionUpdated' update managed cluster \"test-1\" available condition to \"True\", due to \"Managed cluster is available\" E0917 02:58:26.315984 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1beta1.CertificateSigningRequest: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority E0917 02:58:26.598343 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\": x509: certificate signed by unknown authority E0917 02:58:27.613963 1 reflector.go:127] k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.ManagedCluster: failed to list *v1.ManagedCluster: Get \"https://api.aaa-ocp.dev02.location.com:6443/apis/cluster.management.io/v1/managedclusters?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtest-1&resourceVersion=607408&timeout=9m33s&timeoutSeconds=573&watch=true\"\": x509: certificate signed by unknown authority",
"delete secret -n <cluster_name> <cluster_name>-import",
"delete secret -n <cluster_name> <cluster_name>-import",
"get secret -n <cluster_name> <cluster_name>-import -ojsonpath='{.data.import\\.yaml}' | base64 --decode > import.yaml",
"apply -f import.yaml",
"api-resources --verbs=list --namespaced -o name | grep -E '^secrets|^serviceaccounts|^managedclusteraddons|^roles|^rolebindings|^manifestworks|^leases|^managedclusterinfo|^appliedmanifestworks'|^clusteroauths' | xargs -n 1 oc get --show-kind --ignore-not-found -n <cluster_name>",
"edit <resource_kind> <resource_name> -n <namespace>",
"delete ns <cluster-name>",
"delete secret auto-import-secret -n <cluster-namespace>",
"apiVersion: snapshot.storage.k8s.io/v1 deletionPolicy: Delete driver: cinder.csi.openstack.org kind: VolumeSnapshotClass metadata: annotations: snapshot.storage.kubernetes.io/is-default-class: 'true' name: standard-csi parameters: force-create: 'true'",
"adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME",
"adm must-gather --image=quay.io/stolostron/must-gather:SNAPSHOTNAME --dest-dir=<SOMENAME> ; tar -cvzf <SOMENAME>.tgz <SOMENAME>",
"There are two ways to access the provisioned PostgreSQL database.",
"exec -it multicluster-global-hub-postgres-0 -c multicluster-global-hub-postgres -n multicluster-global-hub -- psql -U postgres -d hoh Or access the database installed by crunchy operator exec -it USD(kubectl get pods -n multicluster-global-hub -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -c database -n multicluster-global-hub -- psql -U postgres -d hoh -c \"SELECT 1\"",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Service metadata: name: multicluster-global-hub-postgres-lb namespace: multicluster-global-hub spec: ports: - name: postgres port: 5432 protocol: TCP targetPort: 5432 selector: name: multicluster-global-hub-postgres type: LoadBalancer EOF",
"Host get svc postgres-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}' Password get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"password\" | base64decode}}'",
"patch postgrescluster postgres -n multicluster-global-hub -p '{\"spec\":{\"service\":{\"type\":\"LoadBalancer\"}}}' --type merge",
"Host get svc -n multicluster-global-hub postgres-ha -ojsonpath='{.status.loadBalancer.ingress[0].hostname}' Username get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"user\" | base64decode}}' Password get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"password\" | base64decode}}' Database get secrets -n multicluster-global-hub postgres-pguser-postgres -o go-template='{{index (.data) \"dbname\" | base64decode}}'",
"pg_dump hoh > hoh.sql",
"pg_dump -h my.host.com -p 5432 -U postgres -F t hoh -f hoh-USD(date +%d-%m-%y_%H-%M).tar",
"psql -h another.host.com -p 5432 -U postgres -d hoh < hoh.sql",
"pg_restore -h another.host.com -p 5432 -U postgres -d hoh hoh-USD(date +%d-%m-%y_%H-%M).tar",
"edit managedcluster <cluster-name>",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: name: <cluster-name> spec: hubAcceptsClient: true leaseDurationSeconds: 60",
"get pod -n <new_cluster_name>",
"logs <new_cluster_name_provision_pod_name> -n <new_cluster_name> -c hive",
"describe clusterdeployments -n <new_cluster_name>",
"No subnets provided for zones",
"get secret grafana-config -n open-cluster-management-observability -o jsonpath=\"{.data.grafana\\.ini}\" | base64 -d | grep dataproxy -A 4",
"[dataproxy] timeout = 300 dial_timeout = 30 keep_alive_seconds = 300",
"get secret/grafana-datasources -n open-cluster-management-observability -o jsonpath=\"{.data.datasources\\.yaml}\" | base64 -d | grep queryTimeout",
"queryTimeout: 300s",
"annotate route grafana -n open-cluster-management-observability --overwrite haproxy.router.openshift.io/timeout=300s",
"% oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true True True 56d cluster1 true True True 16h",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-ready-clusters namespace: default spec: clusterSelector: {} status: decisions: - clusterName: cluster1 clusterNamespace: cluster1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: managedcluster-admin-user-zisis namespace: local-cluster rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters verbs: - get",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: managedcluster-admin-user-zisis namespace: local-cluster roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: managedcluster-admin-user-zisis namespace: local-cluster subjects: - kind: User name: zisis apiGroup: rbac.authorization.k8s.io",
"failed to install release: unable to build kubernetes objects from release manifest: unable to recognize \"\": no matches for kind \"Deployment\" in version \"extensions/v1beta1\"",
"error: unable to recognize \"old.yaml\": no matches for kind \"Deployment\" in version \"deployment/v1beta1\"",
"apiVersion: apps/v1 kind: Deployment",
"explain <resource>",
"get klusterlets klusterlet -oyaml",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: deva namespace: ch-obj labels: name: obj-sub spec: type: ObjectBucket pathname: http://ec2-100-26-232-156.compute-1.amazonaws.com:9000/deva sourceNamespaces: - default secretRef: name: dev --- apiVersion: v1 kind: Secret metadata: name: dev namespace: ch-obj labels: name: obj-sub data: AccessKeyID: YWRtaW4= SecretAccessKey: cGFzc3dvcmRhZG1pbg==",
"annotate appsub -n <subscription-namespace> <subscription-name> test=true",
"get pods -n open-cluster-management|grep observability",
"get crd|grep observ",
"multiclusterobservabilities.observability.open-cluster-management.io observabilityaddons.observability.open-cluster-management.io observatoria.core.observatorium.io",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"error: response status code is 500 Internal Server Error, response body is x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"observability-client-ca-certificate\")",
"delete secret observability-controller-open-cluster-management.io-observability-signer-client-cert -n open-cluster-management-addon-observability",
"project open-cluster-management",
"patch search -n open-cluster-management search-v2-operator --type json -p '[{\"op\": \"add\", \"path\": \"/spec/deployments/database/resources\", \"value\": {\"limits\": {\"memory\": \"16Gi\"}, \"requests\": {\"memory\": \"32Mi\", \"cpu\": \"25m\"}}}]'",
"annotate search search-v2-operator search-pause=true",
"edit cm search-postgres -n open-cluster-management",
"postgresql.conf: |- work_mem = '128MB' # Higher values allocate more memory max_parallel_workers_per_gather = '0' # Disables parallel queries shared_buffers = '1GB' # Higher values allocate more memory",
"delete pod search-postgres-xyz search-api-xzy",
"get cm search-postgres -n open-cluster-management -o yaml",
"ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg=\"critical error detected; halting\" err=\"compaction: group 0@5827190780573537664: compact blocks [ /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE]: 2 errors: populate block: add series: write series data: write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device; write /var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE.tmp-for-creation/index: no space left on device\"",
"delete pod observability-thanos-compact-0 -n open-cluster-management-observability",
"ts=2024-01-24T15:34:51.948653839Z caller=compact.go:491 level=error msg=\"critical error detected; halting\" err=\"compaction: group 0@15699422364132557315: compact blocks [/var/thanos/compact/compact/0@15699422364132557315/01HKZGQGJCKQWF3XMA8EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZQK7TD06J2XWGR5EXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HKZYEZ2DVDQXF1STVEXAMPLE /var/thanos/compact/compact/0@15699422364132557315/01HM05APAHXBQSNC0N5EXAMPLE]: populate block: chunk iter: cannot populate chunk 8 from block 01HKZYEZ2DVDQXF1STVEXAMPLE: segment index 0 out of range\"",
"rsh observability-thanos-compact-0 [..] thanos tools bucket verify -r --objstore.config=\"USDOBJSTORE_CONFIG\" --objstore-backup.config=\"USDOBJSTORE_CONFIG\" --id=01HKZYEZ2DVDQXF1STVEXAMPLE",
"thanos tools bucket mark --id \"01HKZYEZ2DVDQXF1STVEXAMPLE\" --objstore.config=\"USDOBJSTORE_CONFIG\" --marker=deletion-mark.json --details=DELETE",
"thanos tools bucket cleanup --objstore.config=\"USDOBJSTORE_CONFIG\"",
"subctl verify --verbose --only connectivity --context <from_context> --tocontext <to_context> --image-override submariner-nettest=quay.io/submariner/nettest:devel --packet-size 200",
"annotate node <node_name> submariner.io/tcp-clamp-mss=1200",
"delete pod -n submariner-operator -l app=submariner-routeagent",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: disable-offload namespace: submariner-operator spec: selector: matchLabels: app: disable-offload template: metadata: labels: app: disable-offload spec: tolerations: - operator: Exists containers: - name: disable-offload image: nicolaka/netshoot imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: true capabilities: add: - net_admin drop: - all privileged: true readOnlyRootFilesystem: false runAsNonRoot: false command: [\"/bin/sh\", \"-c\"] args: - ethtool --offload vxlan-tunnel rx off tx off; ethtool --offload vx-submariner rx off tx off; sleep infinity restartPolicy: Always securityContext: {} serviceAccount: submariner-routeagent serviceAccountName: submariner-routeagent hostNetwork: true",
"message: >- [spec.tls.caCertificate: Invalid value: \"redacted ca certificate data\": failed to parse CA certificate: data does not contain any valid RSA or ECDSA certificates, spec.tls.certificate: Invalid value: \"redacted certificate data\": data does not contain any valid RSA or ECDSA certificates, spec.tls.key: Invalid value: \"\": no key specified]",
"tls: certificate: | {{ print \"{{hub fromSecret \"open-cluster-management\" \"minio-cert\" \"tls.crt\" hub}}\" | base64dec | autoindent }}"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/troubleshooting/index
|
Data Grid Server Guide
|
Data Grid Server Guide Red Hat Data Grid 8.4 Deploy, secure, and manage Data Grid Server deployments Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/index
|
Chapter 4. Creating images
|
Chapter 4. Creating images Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with OpenShift Container Platform. After you create an image, you can push it to the OpenShift image registry. 4.1. Learning container best practices When creating container images to run on OpenShift Container Platform there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift Container Platform. 4.1.1. General container image guidelines The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Container Platform. Reuse images Wherever possible, base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly. In addition, use tags in the FROM instruction, for example, rhel:rhel7 , to make it clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image. Maintain compatibility within tags When tagging your own images, try to maintain backwards compatibility within a tag. For example, if you provide an image named foo and it currently includes version 1.0 , you might provide a tag of foo:v1 . When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image foo:v1 , and downstream consumers of this tag are able to get updates without being broken. If you later release an incompatible update, then switch to a new tag, for example foo:v2 . This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using foo:latest takes on the risk of any incompatible changes being introduced. Avoid multiple processes Do not start multiple services, such as a database and SSHD , inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. OpenShift Container Platform allows you to easily colocate and co-manage related images by grouping them into a single pod. This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes. Use exec in wrapper scripts Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script uses exec so that the script's process is replaced by your software. If you do not use exec , then signals sent by your container runtime go to your wrapper script instead of your software's process. This is not what you want. 
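A minimal sketch of such a wrapper script is shown below; the setup helper and server binary names are illustrative. The final exec line is what ensures the server process replaces the script as PID 1 and receives signals directly:
#!/bin/sh
# Perform one-time setup, then hand control over to the server process.
set -e
/usr/local/bin/generate-config.sh /etc/myserver/config.yaml
exec /usr/bin/myserver --config /etc/myserver/config.yaml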
For example, suppose you have a wrapper script that starts a process for some server. You start your container, for example, using podman run -i , which runs the wrapper script, which in turn starts your process. Now suppose you want to close your container with CTRL+C . If your wrapper script used exec to start the server process, podman sends SIGINT to the server process, and everything works as you expect. If you did not use exec in your wrapper script, podman sends SIGINT to the process for the wrapper script and your process keeps running like nothing happened. Also note that your process runs as PID 1 when running in a container. This means that if your main process terminates, the entire container is stopped, canceling any child processes you launched from your PID 1 process. Clean temporary files Remove all temporary files you create during the build process. This also includes any files added with the ADD command. For example, run the yum clean command after performing yum install operations. You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows: RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y Note that if you instead write: RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers. The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer. In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time. Place instructions in the proper order The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that rarely change at the top of your Dockerfile . Doing so ensures the builds of the same image are very fast because the cache is not invalidated by upper layer changes. For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last: FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile This way each time you edit myfile and rerun podman build or docker build , the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation. If instead you wrote the Dockerfile as: FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y Then each time you changed myfile and reran podman build or docker build , the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well.
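To see the effect of instruction ordering and of consolidating commands into a single RUN statement, you can inspect the layers of the resulting image after a build; the image name is illustrative:
podman build -t myimage:test .
podman history myimage:test    # prints one row per layer with the size that layer adds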
Mark important ports The EXPOSE instruction makes a port in the container available to the host system and other containers. While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run: Exposed ports show up under podman ps associated with containers created from your image. Exposed ports are present in the metadata for your image returned by podman inspect . Exposed ports are linked when you link one container to another. Set environment variables It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile . Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME . Avoid default passwords Avoid setting default passwords. Many people extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords are configurable using an environment variable instead. If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set. Avoid sshd It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the OpenShift Container Platform cluster. Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching. Use volumes for persistent data Images use a volume for persistent data. This way OpenShift Container Platform mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content could not be preserved. All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now makes it easier to take advantage of it later. Explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image. See the Kubernetes documentation for more information on how volumes are used in OpenShift Container Platform. Note Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster. 4.1.2. OpenShift Container Platform-specific guidelines The following are guidelines that apply when creating container images specifically for use on OpenShift Container Platform. 4.1.2.1. 
Enable images for source-to-image (S2I) For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (S2I) build tool. S2I is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. 4.1.2.2. Support arbitrary user ids By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node. For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions. Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image: RUN chgrp -R 0 /some/directory && \ chmod -R g=u /some/directory Because the container user is always a member of the root group, the container user can read and write these files. Warning Care must be taken when altering the directories and file permissions of sensitive areas of a container, which is no different than to a normal system. If applied to sensitive areas, such as /etc/passwd , this can allow the modification of such files by unintended users potentially exposing the container or host. CRI-O supports the insertion of arbitrary user IDs into the container's /etc/passwd , so changing permissions is never required. In addition, the processes running in the container must not listen on privileged ports, ports below 1024, since they are not running as a privileged user. Important If your S2I image does not include a USER declaration with a numeric user, your builds fail by default. To allow images that use either named users or the root 0 user to build in OpenShift Container Platform, you can add the project's builder service account, system:serviceaccount:<your-project>:builder , to the anyuid security context constraint (SCC). Alternatively, you can allow all images to run as any user. 4.1.2.3. Use services for inter-image communication For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image consumes an OpenShift Container Platform service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests. 4.1.2.4. Provide common libraries For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met. 4.1.2.5. 
Use environment variables for configuration Users of your image are able to configure it without having to create a downstream image based on your image. This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file. It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry. Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image. For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present. This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources are defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift Container Platform environment without modifying the application image. In addition, tuning is done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images tune their heap based on the cgroup maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error. 4.1.2.6. Set image metadata Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that are needed. 4.1.2.7. Clustering You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information to perform leader election or failover state; for example, in session replication. Consider how your instances accomplish this communication when running in OpenShift Container Platform. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic. 4.1.2.8. Logging It is best to send all logging to standard out. OpenShift Container Platform collects standard out from containers and sends it to the centralized logging service where it can be viewed. 
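Because standard out is what the platform collects, a quick way to confirm what your image emits once it is deployed is to read the pod logs directly; the deployment name is illustrative:
oc logs deployment/myapp --tail=20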
If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages. If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file. 4.1.2.9. Liveness and readiness probes Document example liveness and readiness probes that can be used with your image. These probes allow users to deploy your image with confidence that traffic is not routed to the container until it is prepared to handle it, and that the container is restarted if the process gets into an unhealthy state. 4.1.2.10. Templates Consider providing an example template with your image. A template gives users an easy way to quickly get your image deployed with a working configuration. Your template must include the liveness and readiness probes you documented with the image, for completeness. 4.2. Including metadata in images Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed. This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future. 4.2.1. Defining image metadata You can use the LABEL instruction in a Dockerfile to define image metadata. Labels are similar to environment variables in that they are key value pairs attached to an image or a container. Labels are different from environment variables in that they are not visible to the running application and they can also be used for fast look-up of images and containers. See the Docker documentation for more information on the LABEL instruction. The label names are typically namespaced. The namespace is set accordingly to reflect the project that is going to pick up the labels and use them. For OpenShift Container Platform the namespace is set to io.openshift and for Kubernetes the namespace is io.k8s . See the Docker custom metadata documentation for details about the format. Table 4.1. Supported Metadata Variable Description io.openshift.tags This label contains a list of tags represented as a list of comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process. io.openshift.wants Specifies a list of tags that the generation tools and the UI use to provide relevant suggestions if you do not have the container images with specified tags already. For example, if the container image wants mysql and redis and you do not have the container image with the redis tag, then the UI can suggest that you add this image to your deployment. io.k8s.description This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. The UI can then use this description together with the container image name to provide more human friendly information to end users. io.openshift.non-scalable An image can use this variable to suggest that it does not support scaling. The UI then communicates this to consumers of that image. Being not-scalable means that the value of replicas should initially not be set higher than 1 .
io.openshift.min-memory and io.openshift.min-cpu This label suggests how much resources the container image needs to work properly. The UI can warn the user that deploying this container image may exceed their user quota. The values must be compatible with Kubernetes quantity. 4.3. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 4.3.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 4.3.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 4.2. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. 
Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 4.4. About testing source-to-image images As an Source-to-Image (S2I) builder image author, you can test your S2I image locally and use the OpenShift Container Platform build system for automated testing and continuous integration. S2I requires the assemble and run scripts to be present to successfully run the S2I build. Providing the save-artifacts script reuses the build artifacts, and providing the usage script ensures that usage information is printed to console when someone runs the container image outside of the S2I. The goal of testing an S2I image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated. 4.4.1. Understanding testing requirements The standard location for the test script is test/run . This script is invoked by the OpenShift Container Platform S2I image builder and it could be a simple Bash script or a static Go binary. The test/run script performs the S2I build, so you must have the S2I binary available in your USDPATH . If required, follow the installation instructions in the S2I README . S2I combines the application source code and builder image, so to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of assemble and run scripts. 4.4.2. Generating scripts and tools The S2I tooling comes with powerful generation tools to speed up the process of creating a new S2I image. The s2i create command produces all the necessary S2I scripts and testing tools along with the Makefile : USD s2i create _<image name>_ _<destination directory>_ The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing. Note The test/run script produced by the s2i create command requires that the sample application sources are inside the test/test-app directory. 4.4.3. Testing locally The easiest way to run the S2I image tests locally is to use the generated Makefile . If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name. Sample Makefile 4.4.4. Basic testing workflow The test script assumes you have already built the image you want to test. If required, first build the S2I image. 
Run one of the following commands: If you use Podman, run the following command: USD podman build -t <builder_image_name> If you use Docker, run the following command: USD docker build -t <builder_image_name> The following steps describe the default workflow to test S2I image builders: Verify the usage script is working: If you use Podman, run the following command: USD podman run <builder_image_name> . If you use Docker, run the following command: USD docker run <builder_image_name> . Build the image: USD s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_ Optional: if you support save-artifacts , run step 2 once again to verify that saving and restoring artifacts works properly. Run the container: If you use Podman, run the following command: USD podman run <output_application_image_name> If you use Docker, run the following command: USD docker run <output_application_image_name> Verify the container is running and the application is responding. Running these steps is generally enough to tell if the builder image is working as expected. 4.4.5. Using OpenShift Container Platform for building the image Once you have a Dockerfile and the other artifacts that make up your new S2I builder image, you can put them in a git repository and use OpenShift Container Platform to build and push the image. Define a Docker build that points to your repository. If your OpenShift Container Platform instance is hosted on a public IP address, the build can be triggered each time you push into your S2I builder image GitHub repository. You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the S2I builder image you updated.
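For reference, the guidance in this section can be combined into a single builder-image Dockerfile. The following sketch is illustrative only and is not taken from the product documentation: the base image, the installed packages, the /usr/libexec/s2i location, and the /opt/app-root directory are assumptions chosen for the example, while the namespaced labels, the io.openshift.s2i.scripts-url label, and the chgrp/chmod pattern follow the conventions described above.

FROM registry.access.redhat.com/ubi8/ubi-minimal
# Namespaced labels help OpenShift Container Platform categorize and describe the image
LABEL io.openshift.tags="sample,builder" \
      io.k8s.description="Sample S2I builder image (illustrative only)" \
      io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
# Install build dependencies in a single layer and clean the package cache in the same layer
RUN microdnf -y install make gcc && microdnf clean all
# Copy the assemble, run, and save-artifacts scripts into the location named by the label above
COPY ./s2i/bin/ /usr/libexec/s2i
# Support arbitrary user IDs: group 0 owns the application directory with the same permissions as the owner
RUN mkdir -p /opt/app-root && chgrp -R 0 /opt/app-root && chmod -R g=u /opt/app-root
USER 1001
WORKDIR /opt/app-root
# Builder image convention: print usage information when the image is run outside of S2I
CMD ["/usr/libexec/s2i/usage"]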
|
[
"RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y",
"RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y",
"FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile",
"FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y",
"RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory",
"LABEL io.openshift.tags mongodb,mongodb24,nosql",
"LABEL io.openshift.wants mongodb,redis",
"LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support",
"LABEL io.openshift.non-scalable true",
"LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4",
"#!/bin/bash # restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd",
"#!/bin/bash # run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"s2i create _<image name>_ _<destination directory>_",
"IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run",
"podman build -t <builder_image_name>",
"docker build -t <builder_image_name>",
"podman run <builder_image_name> .",
"docker run <builder_image_name> .",
"s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_",
"podman run <output_application_image_name>",
"docker run <output_application_image_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/creating-images
|
probe::signal.sys_tkill
|
probe::signal.sys_tkill Name probe::signal.sys_tkill - Sending a kill signal to a thread Synopsis signal.sys_tkill Values sig_pid The PID of the process receiving the kill signal sig The specific signal sent to the process name Name of the probe point pid_name The name of the signal recipient sig_name A string representation of the signal task A task handle to the signal recipient Description The tkill call is analogous to kill(2), except that it also allows a process within a specific thread group to be targeted. Such processes are targeted through their unique thread IDs (TID).
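As a hedged illustration only (this script is not part of the reference entry), the values listed above can be printed with a short SystemTap script run from the shell; the message format is an arbitrary choice for the example:

stap -e 'probe signal.sys_tkill {
  # name, sig, sig_name, pid_name, and sig_pid are the probe values documented above
  printf("%s: signal %d (%s) sent to %s (pid %d)\n", name, sig, sig_name, pid_name, sig_pid)
}'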
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-sys-tkill
|
22.3. Identifying Useful Directory Server Features for Disaster Recovery
|
22.3. Identifying Useful Directory Server Features for Disaster Recovery The hardest part of a recovery is not the hardware; it is getting a reliable copy of the data in the server. There are three Directory Server features that are excellent tools for preparing data copies for disaster recovery: Backing up databases and verifying the backups regularly Multi-supplier replication Chaining Additionally, monitoring the server with a named pipe script and with other Directory Server performance counters can be effective at catching and quickly responding to specific, critical events. 22.3.1. Backing up Directory Data for Disaster Recovery The most useful tool for disaster recovery is to do frequent backups of a directory instance. Archives can be stored on physical media, at different locations than the primary data center or on-site at a cold backup location. Backups can be automated to run regularly through cron jobs. For example, to create a backup of the ldap://server.example.com instance every Monday at 22:00 (10pm): The dsconf backup create command backs up the directory data without having to stop the server first. Note Red Hat recommends backing up the data on all servers in a multi-supplier replication environment. Backing up both directory databases and the directory configuration ( dse.ldif file) is covered in Section 6.3, "Backing up Directory Server" . 22.3.2. Multi-Supplier Replication for High-availability Multi-supplier replication is the best defense against losing a single server and, possibly, even an entire office or department. While a small number of servers are data suppliers, multiple servers all hold the same data - potentially dozens of suppliers and hubs in a single replication environment. This keeps information accessible to clients even if multiple servers go offline. Replication can be used to copy over data to servers and bring replacements online more quickly. Note To protect against data corruption being propagated through replication, frequently back up the database. Replication configuration also allows write operations to be referred to failover servers if the primary supplier is inaccessible. This means that write operations can proceed as normal from the client perspective, even when servers go offline. Example 22.1. Scenarios for Multi-Supplier Replication Replication is a versatile tool for disaster recovery in several scenarios: For a single server failure, all of the data stored on that instance is both accessible and retrievable from other servers. For the loss of an entire office or colocation facility, servers can be mirrored at an entirely different physical location (which is aided by Directory Server's wide area replication performance). With minimal effort, traffic can be redirected to the replicated site without having to bring new servers online. Configuring replication is covered in Chapter 15, Managing Replication . 22.3.3. Chaining Databases for High-availability Chaining is a configuration where a client sends a request to one server and it automatically forwards that request to another server to process. There can be multiple servers configured in the database link (or chain) to allow for automatic failover if one server is not available. Example 22.2. Scenarios for Chaining When chaining is combined with a list of failover servers, client traffic can be automatically redirected from a single server (or even group of servers) when they are offline.
This does not help in recovery, but it helps manage the transition from primary to backup servers. Chaining databases is covered in Section 2.3, "Creating and Maintaining Database Links" .
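The cron example above runs the backup once a week, on Mondays at 22:00. As a sketch only, changing the cron day-of-week field from 1 to * runs the same documented dsconf backup create command every day at 22:00 instead:

0 22 * * * /usr/sbin/dsconf -D "cn=Directory Manager" ldap://server.example.com backup create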
|
[
"0 22 * * 1 /usr/sbin/dsconf -D \"cn=Directory Manager\" ldap://server.example.com backup create"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/features-for-recovery
|
Chapter 9. Scheduling NUMA-aware workloads
|
Chapter 9. Scheduling NUMA-aware workloads Learn about NUMA-aware scheduling and how you can use it to deploy high performance workloads in an OpenShift Container Platform cluster. The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads. 9.1. About NUMA-aware scheduling Introduction to NUMA Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Colocated resources are said to be in the same NUMA zone . For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone. Performance considerations NUMA architecture allows a CPU with multiple memory controllers to use any available memory across CPU complexes, regardless of where the memory is located. This allows for increased flexibility at the expense of performance. A CPU processing a workload using memory that is outside its NUMA zone is slower than a workload processed in a single NUMA zone. Also, for I/O-constrained workloads, the network interface on a distant NUMA zone slows down how quickly information can reach the application. High-performance workloads, such as telecommunications workloads, cannot operate to specification under these conditions. NUMA-aware scheduling NUMA-aware scheduling aligns the requested cluster compute resources (CPUs, memory, devices) in the same NUMA zone to process latency-sensitive or high-performance workloads efficiently. NUMA-aware scheduling also improves pod density per compute node for greater resource efficiency. Integration with Node Tuning Operator By integrating the Node Tuning Operator's performance profile with NUMA-aware scheduling, you can further configure CPU affinity to optimize performance for latency-sensitive workloads. Default scheduling logic The default OpenShift Container Platform pod scheduler scheduling logic considers the available resources of the entire compute node, not individual NUMA zones. If the most restrictive resource alignment is requested in the kubelet topology manager, error conditions can occur when admitting the pod to a node. Conversely, if the most restrictive resource alignment is not requested, the pod can be admitted to the node without proper resource alignment, leading to worse or unpredictable performance. For example, runaway pod creation with Topology Affinity Error statuses can occur when the pod scheduler makes suboptimal scheduling decisions for guaranteed pod workloads without knowing if the pod's requested resources are available. Scheduling mismatch decisions can cause indefinite pod startup delays. Also, depending on the cluster state and resource allocation, poor pod scheduling decisions can cause extra load on the cluster because of failed startup attempts. NUMA-aware pod scheduling diagram The NUMA Resources Operator deploys a custom NUMA resources secondary scheduler and other resources to mitigate against the shortcomings of the default OpenShift Container Platform pod scheduler. The following diagram provides a high-level overview of NUMA-aware pod scheduling. Figure 9.1. 
NUMA-aware scheduling overview NodeResourceTopology API The NodeResourceTopology API describes the available NUMA zone resources in each compute node. NUMA-aware scheduler The NUMA-aware secondary scheduler receives information about the available NUMA zones from the NodeResourceTopology API and schedules high-performance workloads on a node where it can be optimally processed. Node topology exporter The node topology exporter exposes the available NUMA zone resources for each compute node to the NodeResourceTopology API. The node topology exporter daemon tracks the resource allocation from the kubelet by using the PodResources API. PodResources API The PodResources API is local to each node and exposes the resource topology and available resources to the kubelet. Note The List endpoint of the PodResources API exposes exclusive CPUs allocated to a particular container. The API does not expose CPUs that belong to a shared pool. The GetAllocatableResources endpoint exposes allocatable resources available on a node. 9.2. NUMA resource scheduling strategies When scheduling high-performance workloads, the secondary scheduler can employ different strategies to determine which NUMA node within a chosen worker node will handle the workload. The supported strategies in OpenShift Container Platform include LeastAllocated , MostAllocated , and BalancedAllocation . Understanding these strategies helps optimize workload placement for performance and resource utilization. When a high-performance workload is scheduled in a NUMA-aware cluster, the following steps occur: The scheduler first selects a suitable worker node based on cluster-wide criteria. For example taints, labels, or resource availability. After a worker node is selected, the scheduler evaluates its NUMA nodes and applies a scoring strategy to decide which NUMA node will handle the workload. After a workload is scheduled, the selected NUMA node's resources are updated to reflect the allocation. The default strategy applied is the LeastAllocated strategy. This assigns workloads to the NUMA node with the most available resources that is the least utilized NUMA node. The goal of this strategy is to spread workloads across NUMA nodes to reduce contention and avoid hotspots. The following table summarizes the different strategies and their outcomes: Scoring strategy summary Table 9.1. Scoring strategy summary Strategy Description Outcome LeastAllocated Favors NUMA nodes with the most available resources. Spreads workloads to reduce contention and ensure headroom for high-priority tasks. MostAllocated Favors NUMA nodes with the least available resources. Consolidates workloads on fewer NUMA nodes, freeing others for energy efficiency. BalancedAllocation Favors NUMA nodes with balanced CPU and memory usage. Ensures even resource utilization, preventing skewed usage patterns. LeastAllocated strategy example The LeastAllocated is the default strategy. This strategy assigns workloads to the NUMA node with the most available resources, minimizing resource contention and spreading workloads across NUMA nodes. This reduces hotspots and ensures sufficient headroom for high-priority tasks. Assume a worker node has two NUMA nodes, and the workload requires 4 vCPUs and 8 GB of memory: Table 9.2. 
Example initial NUMA nodes state NUMA node Total CPUs Used CPUs Total memory (GB) Used memory (GB) Available resources NUMA 1 16 12 64 56 4 CPUs, 8 GB memory NUMA 2 16 6 64 24 10 CPUs, 40 GB memory Because NUMA 2 has more available resources compared to NUMA 1, the workload is assigned to NUMA 2. MostAllocated strategy example The MostAllocated strategy consolidates workloads by assigning them to the NUMA node with the least available resources, which is the most utilized NUMA node. This approach helps free other NUMA nodes for energy efficiency or critical workloads requiring full isolation. This example uses the "Example initial NUMA nodes state" values listed in the LeastAllocated section. The workload again requires 4 vCPUs and 8 GB memory. NUMA 1 has fewer available resources compared to NUMA 2, so the scheduler assigns the workload to NUMA 1, further utilizing its resources while leaving NUMA 2 idle or minimally loaded. BalancedAllocation strategy example The BalancedAllocation strategy assigns workloads to the NUMA node with the most balanced resource utilization across CPU and memory. The goal is to prevent imbalanced usage, such as high CPU utilization with underutilized memory. Assume a worker node has the following NUMA node states: Table 9.3. Example NUMA nodes initial state for BalancedAllocation NUMA node CPU usage Memory usage BalancedAllocation score NUMA 1 60% 55% High (more balanced) NUMA 2 80% 20% Low (less balanced) NUMA 1 has a more balanced CPU and memory utilization compared to NUMA 2 and therefore, with the BalancedAllocation strategy in place, the workload is assigned to NUMA 1. Additional resources Scheduling pods using a secondary scheduler Changing where high-performance workloads run 9.3. Installing the NUMA Resources Operator NUMA Resources Operator deploys resources that allow you to schedule NUMA-aware workloads and deployments. You can install the NUMA Resources Operator using the OpenShift Container Platform CLI or the web console. 9.3.1. Installing the NUMA Resources Operator using the CLI As a cluster administrator, you can install the Operator using the CLI. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NUMA Resources Operator: Save the following YAML in the nro-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources Create the Namespace CR by running the following command: USD oc create -f nro-namespace.yaml Create the Operator group for the NUMA Resources Operator: Save the following YAML in the nro-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources Create the OperatorGroup CR by running the following command: USD oc create -f nro-operatorgroup.yaml Create the subscription for the NUMA Resources Operator: Save the following YAML in the nro-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.15" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f nro-sub.yaml Verification Verify that the installation succeeded by inspecting the CSV resource in the openshift-numaresources namespace. 
Run the following command: USD oc get csv -n openshift-numaresources Example output NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.15.2 numaresources-operator 4.15.2 Succeeded 9.3.2. Installing the NUMA Resources Operator using the web console As a cluster administrator, you can install the NUMA Resources Operator using the web console. Procedure Create a namespace for the NUMA Resources Operator: In the OpenShift Container Platform web console, click Administration Namespaces . Click Create Namespace , enter openshift-numaresources in the Name field, and then click Create . Install the NUMA Resources Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose numaresources-operator from the list of available Operators, and then click Install . In the Installed Namespaces field, select the openshift-numaresources namespace, and then click Install . Optional: Verify that the NUMA Resources Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that NUMA Resources Operator is listed in the openshift-numaresources namespace with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the default project. 9.4. Scheduling NUMA-aware workloads Clusters running latency-sensitive workloads typically feature performance profiles that help to minimize workload latency and optimize performance. The NUMA-aware scheduler deploys workloads based on available node NUMA resources and with respect to any performance profile settings applied to the node. The combination of NUMA-aware deployments, and the performance profile of the workload, ensures that workloads are scheduled in a way that maximizes performance. For the NUMA Resources Operator to be fully operational, you must deploy the NUMAResourcesOperator custom resource and the NUMA-aware secondary pod scheduler. 9.4.1. Creating the NUMAResourcesOperator custom resource When you have installed the NUMA Resources Operator, then create the NUMAResourcesOperator custom resource (CR) that instructs the NUMA Resources Operator to install all the cluster infrastructure needed to support the NUMA-aware scheduler, including daemon sets and APIs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Create the NUMAResourcesOperator custom resource: Save the following minimal required YAML file example as nrop.yaml : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 1 This must match the MachineConfigPool resource that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool resource named worker-cnf that designates a set of nodes expected to run telecommunications workloads. Each NodeGroup must match exactly one MachineConfigPool . Configurations where NodeGroup matches more than one MachineConfigPool are not supported. 
Create the NUMAResourcesOperator CR by running the following command: USD oc create -f nrop.yaml Note Creating the NUMAResourcesOperator triggers a reboot on the corresponding machine config pool and therefore the affected node. Optional: To enable NUMA-aware scheduling for multiple machine config pools (MCPs), define a separate NodeGroup for each pool. For example, define three NodeGroups for worker-cnf , worker-ht , and worker-other , in the NUMAResourcesOperator CR as shown in the following example: Example YAML definition for a NUMAResourcesOperator CR with multiple NodeGroups apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other Verification Verify that the NUMA Resources Operator deployed successfully by running the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io Example output NAME AGE numaresourcesoperator 27s After a few minutes, run the following command to verify that the required resources deployed successfully: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s 9.4.2. Deploying the NUMA-aware secondary pod scheduler After installing the NUMA Resources Operator, deploy the NUMA-aware secondary pod scheduler to optimize pod placement for improved performance and reduced latency in NUMA-based systems. Procedure Create the NUMAResourcesScheduler custom resource that deploys the NUMA-aware custom pod scheduler: Save the following minimal required YAML in the nro-scheduler.yaml file: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.15" 1 1 In a disconnected environment, make sure to configure the resolution of this image by completing one of the following actions: Creating an ImageTagMirrorSet custom resource (CR). For more information, see "Configuring image registry repository mirroring" in the "Additional resources" section. Setting the URL to the disconnected registry. 
Create the NUMAResourcesScheduler CR by running the following command: USD oc create -f nro-scheduler.yaml After a few seconds, run the following command to confirm the successful deployment of the required resources: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m Additional resources Configuring image registry repository mirroring 9.4.3. Configuring a single NUMA node policy The NUMA Resources Operator requires a single NUMA node policy to be configured on the cluster. This can be achieved in two ways: by creating and applying a performance profile, or by configuring a KubeletConfig. Note The preferred way to configure a single NUMA node policy is to apply a performance profile. You can use the Performance Profile Creator (PPC) tool to create the performance profile. If a performance profile is created on the cluster, it automatically creates other tuning components like KubeletConfig and the tuned profile. For more information about creating a performance profile, see "About the Performance Profile Creator" in the "Additional resources" section. Additional resources About the Performance Profile Creator 9.4.4. Sample performance profile This example YAML shows a performance profile created by using the performance profile creator (PPC) tool: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: "3" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: "" 1 nodeSelector: node-role.kubernetes.io/worker: "" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true 1 This should match the MachineConfigPool that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool named worker-cnf that designates a set of nodes that run telecommunications workloads. 2 The topologyPolicy must be set to single-numa-node . Ensure that this is the case by setting the topology-manager-policy argument to single-numa-node when running the PPC tool. 9.4.5. Creating a KubeletConfig CRD The recommended way to configure a single NUMA node policy is to apply a performance profile. Another way is by creating and applying a KubeletConfig custom resource (CR), as shown in the following procedure. 
Procedure Create the KubeletConfig custom resource (CR) that configures the pod admittance policy for the machine profile: Save the following YAML in the nro-kubeletconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 kubeletConfig: cpuManagerPolicy: "static" 2 cpuManagerReconcilePeriod: "5s" reservedSystemCPUs: "0,1" 3 memoryManagerPolicy: "Static" 4 evictionHard: memory.available: "100Mi" kubeReserved: memory: "512Mi" reservedMemory: - numaNode: 0 limits: memory: "1124Mi" systemReserved: memory: "512Mi" topologyManagerPolicy: "single-numa-node" 5 1 Adjust this label to match the machineConfigPoolSelector in the NUMAResourcesOperator CR. 2 For cpuManagerPolicy , static must use a lowercase s . 3 Adjust this based on the CPU on your nodes. 4 For memoryManagerPolicy , Static must use an uppercase S . 5 topologyManagerPolicy must be set to single-numa-node . Create the KubeletConfig CR by running the following command: USD oc create -f nro-kubeletconfig.yaml Note Applying performance profile or KubeletConfig automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in KubeletConfig that address the node group. 9.4.6. Scheduling workloads with the NUMA-aware scheduler Now that topo-aware-scheduler is installed, the NUMAResourcesOperator and NUMAResourcesScheduler CRs are applied and your cluster has a matching performance profile or kubeletconfig , you can schedule workloads with the NUMA-aware scheduler using deployment CRs that specify the minimum required resources to process the workload. The following example deployment uses NUMA-aware scheduling for a sample workload. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the name of the NUMA-aware scheduler that is deployed in the cluster by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output "topo-aware-scheduler" Create a Deployment CR that uses scheduler named topo-aware-scheduler , for example: Save the following YAML in the nro-deployment.yaml file: apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: "100Mi" cpu: "10" requests: memory: "100Mi" cpu: "10" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: ["/bin/sh", "-c"] args: [ "while true; do sleep 1h; done;" ] resources: limits: memory: "100Mi" cpu: "8" requests: memory: "100Mi" cpu: "8" 1 schedulerName must match the name of the NUMA-aware scheduler that is deployed in your cluster, for example topo-aware-scheduler . 
Create the Deployment CR by running the following command: USD oc create -f nro-deployment.yaml Verification Verify that the deployment was successful: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m Verify that the topo-aware-scheduler is scheduling the deployed pod by running the following command: USD oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1 Note Deployments that request more resources than is available for scheduling will fail with a MinimumReplicasUnavailable error. The deployment succeeds when the required resources become available. Pods remain in the Pending state until the required resources are available. Verify that the expected allocated resources are listed for the node. Identify the node that is running the deployment pod by running the following command: USD oc get pods -n openshift-numaresources -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none> Run the following command with the name of that node that is running the deployment pod. USD oc describe noderesourcetopologies.topology.node.k8s.io worker-1 Example output ... Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node 1 The Available capacity is reduced because of the resources that have been allocated to the guaranteed pod. Resources consumed by guaranteed pods are subtracted from the available node resources listed under noderesourcetopologies.topology.node.k8s.io . Resource allocations for pods with a Best-effort or Burstable quality of service ( qosClass ) are not reflected in the NUMA node resources under noderesourcetopologies.topology.node.k8s.io . If a pod's consumed resources are not reflected in the node resource calculation, verify that the pod has qosClass of Guaranteed and the CPU request is an integer value, not a decimal value. You can verify the that the pod has a qosClass of Guaranteed by running the following command: USD oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath="{ .status.qosClass }" Example output Guaranteed 9.5. Optional: Configuring polling operations for NUMA resources updates The daemons controlled by the NUMA Resources Operator in their nodeGroup poll resources to retrieve updates about available NUMA resources. You can fine-tune polling operations for these daemons by configuring the spec.nodeGroups specification in the NUMAResourcesOperator custom resource (CR). This provides advanced control of polling operations. 
Configure these specifications to improve scheduling behaviour and troubleshoot suboptimal scheduling decisions. The configuration options are the following: infoRefreshMode : Determines the trigger condition for polling the kubelet. The NUMA Resources Operator reports the resulting information to the API server. infoRefreshPeriod : Determines the duration between polling updates. podsFingerprinting : Determines if point-in-time information for the current set of pods running on a node is exposed in polling updates. Note podsFingerprinting is enabled by default. podsFingerprinting is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler CR. The cacheResyncPeriod specification helps to report more exact resource availability by monitoring pending resources on nodes. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Configure the spec.nodeGroups specification in your NUMAResourcesOperator CR: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker 1 Valid values are Periodic , Events , PeriodicAndEvents . Use Periodic to poll the kubelet at intervals that you define in infoRefreshPeriod . Use Events to poll the kubelet at every pod lifecycle event. Use PeriodicAndEvents to enable both methods. 2 Define the polling interval for Periodic or PeriodicAndEvents refresh modes. The field is ignored if the refresh mode is Events . 3 Valid values are Enabled , Disabled , and EnabledExclusiveResources . Setting to Enabled is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler . Verification After you deploy the NUMA Resources Operator, verify that the node group configurations were applied by running the following command: USD oc get numaresop numaresourcesoperator -o json | jq '.status' Example output ... "config": { "infoRefreshMode": "Periodic", "infoRefreshPeriod": "10s", "podsFingerprinting": "Enabled" }, "name": "worker" ... 9.6. Troubleshooting NUMA-aware scheduling To troubleshoot common problems with NUMA-aware pod scheduling, perform the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Verify that the noderesourcetopologies CRD is deployed in the cluster by running the following command: USD oc get crd | grep noderesourcetopologies Example output NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z Check that the NUMA-aware scheduler name matches the name specified in your NUMA-aware workloads by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output topo-aware-scheduler Verify that NUMA-aware schedulable nodes have the noderesourcetopologies CR applied to them. Run the following command: USD oc get noderesourcetopologies.topology.node.k8s.io Example output NAME AGE compute-0.example.com 17h compute-1.example.com 17h Note The number of nodes should equal the number of worker nodes that are configured by the machine config pool ( mcp ) worker definition. 
Verify the NUMA zone granularity for all schedulable nodes by running the following command: USD oc get noderesourcetopologies.topology.node.k8s.io -o yaml Example output apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:38Z" generation: 63760 name: worker-0 resourceVersion: "8450223" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262352048128" available: "262352048128" capacity: "270107316224" name: memory - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269231067136" available: "269231067136" capacity: "270573244416" name: memory - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:37Z" generation: 62061 name: worker-1 resourceVersion: "8450129" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262391033856" available: "262391033856" capacity: "270146301952" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269192085504" available: "269192085504" capacity: "270534262784" name: memory type: Node kind: List metadata: resourceVersion: "" selfLink: "" 1 Each stanza under zones describes the resources for a single NUMA zone. 2 resources describes the current state of the NUMA zone resources. Check that resources listed under items.zones.resources.available correspond to the exclusive NUMA zone resources allocated to each guaranteed pod. 9.6.1. Reporting more exact resource availability Enable the cacheResyncPeriod specification to help the NUMA Resources Operator report more exact resource availability by monitoring pending resources on nodes and synchronizing this information in the scheduler cache at a defined interval. This also helps to minimize Topology Affinity Error errors because of sub-optimal scheduling decisions. The lower the interval, the greater the network load. The cacheResyncPeriod specification is disabled by default. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 92m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-cacheresync.yaml . This example changes the log level to Debug : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.15" cacheResyncPeriod: "5s" 1 1 Enter an interval value in seconds for synchronization of the scheduler cache. A value of 5s is typical for most implementations. Create the updated NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-cacheresync.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler show the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 9.6.2. Changing where high-performance workloads run The NUMA-aware secondary scheduler is responsible for scheduling high-performance workloads on a worker node and within a NUMA node where the workloads can be optimally processed. By default, the secondary scheduler assigns workloads to the NUMA node within the chosen worker node that has the most available resources. 
If you want to change where the workloads run, you can add the scoringStrategy setting to the NUMAResourcesScheduler custom resource and set its value to either MostAllocated or BalancedAllocation . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource by using the following steps: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 92m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-mostallocated.yaml . This example changes the scoringStrategy to MostAllocated : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v{product-version}" scoringStrategy: type: "MostAllocated" 1 1 If the scoringStrategy configuration is omitted, the default of LeastAllocated applies. Create the updated NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-mostallocated.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification Check that the NUMA-aware scheduler was successfully deployed by using the following steps: Run the following command to check that the custom resource definition (CRD) is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Verify that the ScoringStrategy has been applied correctly by running the following command to check the relevant ConfigMap resource for the scheduler: USD oc get -n openshift-numaresources cm topo-aware-scheduler-config -o yaml | grep scoring -A 1 Example output scoringStrategy: type: MostAllocated 9.6.3. Checking the NUMA-aware scheduler logs Troubleshoot problems with the NUMA-aware scheduler by reviewing the logs. If required, you can increase the scheduler log level by modifying the spec.logLevel field of the NUMAResourcesScheduler resource. Acceptable values are Normal , Debug , and Trace , with Trace being the most verbose option. Note To change the log level of the secondary scheduler, delete the running scheduler resource and re-deploy it with the changed log level. The scheduler is unavailable for scheduling new workloads during this downtime. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 90m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-debug.yaml . 
This example changes the log level to Debug : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.15" logLevel: Debug Create the updated Debug logging NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-debug.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler shows the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 9.6.4. Troubleshooting the resource topology exporter Troubleshoot noderesourcetopologies objects where unexpected results are occurring by inspecting the corresponding resource-topology-exporter logs. Note It is recommended that NUMA resource topology exporter instances in the cluster are named for nodes they refer to. For example, a worker node with the name worker should have a corresponding noderesourcetopologies object called worker . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the daemonsets managed by the NUMA Resources Operator. Each daemonset has a corresponding nodeGroup in the NUMAResourcesOperator CR. 
Run the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath="{.status.daemonsets[0]}" Example output {"name":"numaresourcesoperator-worker","namespace":"openshift-numaresources"} Get the label for the daemonset of interest using the value for name from the step: USD oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath="{.spec.selector.matchLabels}" Example output {"name":"resource-topology"} Get the pods using the resource-topology label by running the following command: USD oc get pods -n openshift-numaresources -l name=resource-topology -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com Examine the logs of the resource-topology-exporter container running on the worker pod that corresponds to the node you are troubleshooting. Run the following command: USD oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c Example output I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: "0": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved "0-1" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online "0-103" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable "2-103" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi 9.6.5. Correcting a missing resource topology exporter config map If you install the NUMA Resources Operator in a cluster with misconfigured cluster settings, in some circumstances, the Operator is shown as active but the logs of the resource topology exporter (RTE) daemon set pods show that the configuration for the RTE is missing, for example: Info: couldn't find configuration in "/etc/resource-topology-exporter/config.yaml" This log message indicates that the kubeletconfig with the required configuration was not properly applied in the cluster, resulting in a missing RTE configmap . For example, the following cluster is missing a numaresourcesoperator-worker configmap custom resource (CR): USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h In a correctly configured cluster, oc get configmap also returns a numaresourcesoperator-worker configmap CR. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Compare the values for spec.machineConfigPoolSelector.matchLabels in kubeletconfig and metadata.labels in the MachineConfigPool ( mcp ) worker CR using the following commands: Check the kubeletconfig labels by running the following command: USD oc get kubeletconfig -o yaml Example output machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled Check the mcp labels by running the following command: USD oc get mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" The cnf-worker-tuning: enabled label is not present in the MachineConfigPool object. 
Edit the MachineConfigPool CR to include the missing label, for example: USD oc edit mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" cnf-worker-tuning: enabled Apply the label changes and wait for the cluster to apply the updated configuration. Run the following command: Verification Check that the missing numaresourcesoperator-worker configmap CR is applied: USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h 9.6.6. Collecting NUMA Resources Operator data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with the NUMA Resources Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure To collect NUMA Resources Operator data with must-gather , you must specify the NUMA Resources Operator must-gather image. USD oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.15
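Note The following shell sketch is not part of the documented procedure. It simply chains the commands shown in this section (delete the NUMAResourcesScheduler resource, re-create it, and check the topo-aware-scheduler-config ConfigMap) so the scoring strategy can be switched in one step. The image tag v4.15 is taken from the neighboring examples, and the --ignore-not-found flag and heredoc piping are assumptions about how you prefer to drive oc; adjust them for your cluster before use.

#!/bin/bash
# Sketch only: swap the secondary scheduler's scoring strategy by deleting and
# re-creating the NUMAResourcesScheduler resource, as described above.
set -euo pipefail

STRATEGY="${1:-MostAllocated}"   # LeastAllocated | MostAllocated | BalancedAllocation

# Remove the currently running scheduler resource, if present.
oc delete NUMAResourcesScheduler numaresourcesscheduler --ignore-not-found

# Re-create it with the requested scoringStrategy.
cat <<EOF | oc create -f -
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesScheduler
metadata:
  name: numaresourcesscheduler
spec:
  imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.15"
  scoringStrategy:
    type: "${STRATEGY}"
EOF

# Verify the strategy in the scheduler ConfigMap; the ConfigMap can take a
# short time to be regenerated after the resource is re-created.
oc get -n openshift-numaresources cm topo-aware-scheduler-config -o yaml | grep scoring -A 1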
|
[
"apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources",
"oc create -f nro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources",
"oc create -f nro-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.15\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nro-sub.yaml",
"oc get csv -n openshift-numaresources",
"NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.15.2 numaresources-operator 4.15.2 Succeeded",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1",
"oc create -f nrop.yaml",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other",
"oc get numaresourcesoperators.nodetopology.openshift.io",
"NAME AGE numaresourcesoperator 27s",
"oc get all -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.15\" 1",
"oc create -f nro-scheduler.yaml",
"oc get all -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"3\" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 nodeSelector: node-role.kubernetes.io/worker: \"\" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 kubeletConfig: cpuManagerPolicy: \"static\" 2 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" 3 memoryManagerPolicy: \"Static\" 4 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 5",
"oc create -f nro-kubeletconfig.yaml",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"\"topo-aware-scheduler\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"",
"oc create -f nro-deployment.yaml",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m",
"oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1",
"oc get pods -n openshift-numaresources -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none>",
"oc describe noderesourcetopologies.topology.node.k8s.io worker-1",
"Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node",
"oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath=\"{ .status.qosClass }\"",
"Guaranteed",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker",
"oc get numaresop numaresourcesoperator -o json | jq '.status'",
"\"config\": { \"infoRefreshMode\": \"Periodic\", \"infoRefreshPeriod\": \"10s\", \"podsFingerprinting\": \"Enabled\" }, \"name\": \"worker\"",
"oc get crd | grep noderesourcetopologies",
"NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"topo-aware-scheduler",
"oc get noderesourcetopologies.topology.node.k8s.io",
"NAME AGE compute-0.example.com 17h compute-1.example.com 17h",
"oc get noderesourcetopologies.topology.node.k8s.io -o yaml",
"apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: \"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 92m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.15\" cacheResyncPeriod: \"5s\" 1",
"oc create -f nro-scheduler-cacheresync.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m",
"oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources",
"I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 92m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v{product-version}\" scoringStrategy: type: \"MostAllocated\" 1",
"oc create -f nro-scheduler-mostallocated.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get -n openshift-numaresources cm topo-aware-scheduler-config -o yaml | grep scoring -A 1",
"scoringStrategy: type: MostAllocated",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 90m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.15\" logLevel: Debug",
"oc create -f nro-scheduler-debug.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m",
"oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources",
"I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"",
"oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"",
"{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}",
"oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"",
"{\"name\":\"resource-topology\"}",
"oc get pods -n openshift-numaresources -l name=resource-topology -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com",
"oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c",
"I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi",
"Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"oc get kubeletconfig -o yaml",
"machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled",
"oc get mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"oc edit mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.15"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/cnf-numa-aware-scheduling
|
Chapter 19. Jakarta Server Faces configuration
|
Chapter 19. Jakarta Server Faces configuration The jsf subsystem enables the installation of multiple Jakarta Server Faces implementations on the same JBoss EAP server instance. You can install a version of Sun Mojarra or Apache MyFaces that implements the Jakarta Server Faces 4.0 specification or later. The feature pack can only be used to install the Apache MyFaces implementation. Note Only the Jakarta Server Faces implementation included with JBoss EAP is fully supported. 19.1. Installing a Jakarta Server Faces implementation JBoss EAP supports provisioning a server with only the necessary features by using the JBoss EAP Installation Manager, which delivers these features as feature packs. Prerequisites You have installed JBoss EAP. Procedure Create the myfaces-manifest.yaml file with the following content: schemaVersion: "1.0.0" name: "MyFaces manifest" id: "myfaces" streams: - groupId: "org.apache.myfaces.core" artifactId: "myfaces-api" version: "4.0.2" - groupId: "org.apache.myfaces.core" artifactId: "myfaces-impl" version: "4.0.2" Add the MyFaces manifest by using the following command: Deploy the MyFaces Maven manifest to your local Maven repository by using the following command: Provision a server using the MyFaces feature pack by using the following command: Start the server. Verification Use the following CLI command to verify that the new Jakarta Server Faces implementation has been installed successfully: 19.2. Changing the default Jakarta Server Faces implementation The multi-Jakarta Server Faces feature includes the default-jsf-impl-slot attribute in the jsf subsystem, which enables you to change the default Jakarta Server Faces implementation. Prerequisites You have the multiple Jakarta Server Faces implementations installed on the server. Procedure Use the write-attribute command to set the value of the default-jsf-impl-slot attribute to one of the active Jakarta Server Faces implementations: Replace JSF_IMPLEMENTATION with the name of the Jakarta Server Faces implementation you want to set as default. Restart the JBoss EAP server for the change to take effect. Verification Identify the available Jakarta Server Faces implementations by using the following command: Expected output 19.3. Jakarta Server Faces application configuration for non-default implementation To configure a Jakarta Server Faces application to use a Jakarta Server Faces implementation other than the default, add the org.jboss.jbossfaces.JSF_CONFIG_NAME context parameter to the web.xml file. This parameter instructs the jsf subsystem to apply the specified Jakarta Server Faces implementation when deploying the application. For example, if you want to use MyFaces 4.0.0 in your application, include the following context parameter in the web.xml file: <context-param> <param-name>org.jboss.jbossfaces.JSF_CONFIG_NAME</param-name> <param-value>myfaces-4.0.0</param-value> </context-param> If a Jakarta Server Faces application does not include this context parameter, the 'jsf' subsystem will use the default Jakarta Server Faces implementation. 19.4. Disallowing DOCTYPE declarations You can configure the jsf subsystem to disallow DOCTYPE declarations in Jakarta Server Faces deployments. This setting improves security by preventing the use of external entities. Prerequisites You have management CLI access to configure the jsf subsystem. 
Procedure Disallow DOCTYPE declarations in all Jakarta Server Faces deployments by using the following command: Restart the JBoss EAP server for the changes to take effect: To allow DOCTYPE declarations for a specific Jakarta Server Faces deployment, add the com.sun.faces.disallowDoctypeDecl context parameter to the deployment's web.xml file with the following configuration: <context-param> <param-name>com.sun.faces.disallowDoctypeDecl</param-name> <param-value>false</param-value> </context-param>
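Note The management CLI operations in this chapter can also be run non-interactively. The following sketch is an assumption about how you drive the CLI rather than part of the documented procedure: it collects the operations shown above into a CLI script file and runs it with jboss-cli.sh. The slot name myfaces is taken from the expected output in section 19.2; replace it with the slot reported by list-active-jsf-impls() on your server.

# jsf-default.cli — the same management operations shown in this chapter
/subsystem=jsf:list-active-jsf-impls()
/subsystem=jsf:write-attribute(name=default-jsf-impl-slot,value=myfaces)
reload
/subsystem=jsf:read-attribute(name=default-jsf-impl-slot)

Run the script against a local standalone server, assuming JBOSS_HOME points at your installation:

$JBOSS_HOME/bin/jboss-cli.sh --connect --file=jsf-default.cli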
|
[
"schemaVersion: \"1.0.0\" name: \"MyFaces manifest\" id: \"myfaces\" streams: - groupId: \"org.apache.myfaces.core\" artifactId: \"myfaces-api\" version: \"4.0.2\" - groupId: \"org.apache.myfaces.core\" artifactId: \"myfaces-impl\" version: \"4.0.2\"",
"USDJBOSS_HOME/bin/jboss-eap-installation-manager.sh channel add --channel-name=myfaces --manifest=myfaces-manifest.yaml --repositories=https://repo1.maven.org/maven2/",
"mvn deploy:deploy-file -Dfile=myfaces-manifest.yaml -DgroupId=org.apache.myfaces.channel -DartifactId=myfaces -Dclassifier=manifest -Dpackaging=yaml -Dversion=4.0.2 -Durl=file://USDHOME/.m2/repository",
"USDJBOSS_HOME/bin/jboss-eap-installation-manager.sh fp add --fpl=org.jboss.eap:eap-myfaces-feature-pack --layers=myfaces",
"[standalone@localhost:9990 /] /subsystem=jsf:list-active-jsf-impls()",
"/subsystem=jsf:write-attribute(name=default-jsf-impl-slot,value= JSF_IMPLEMENTATION )",
"reload",
"/subsystem=jsf:read-attribute(name=default-jsf-impl-slot)",
"{ \"outcome\" => \"success\", \"result\" => \"myfaces\" }",
"<context-param> <param-name>org.jboss.jbossfaces.JSF_CONFIG_NAME</param-name> <param-value>myfaces-4.0.0</param-value> </context-param>",
"/subsystem=jsf:write-attribute(name=disallow-doctype-decl, value=true)",
"reload",
"<context-param> <param-name>com.sun.faces.disallowDoctypeDecl</param-name> <param-value>false</param-value> </context-param>"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/jakarta-server-faces-configuration_jakarta-connectors-management
|
Chapter 101. Managing IdM servers by using Ansible
|
Chapter 101. Managing IdM servers by using Ansible You can use Red Hat Ansible Engine to manage the servers in your Identity Management (IdM) topology. You can use the server module in the ansible-freeipa package to check the presence or absence of a server in the IdM topology. You can also hide any replica or make a replica visible. The section contains the following topics: Checking that an IdM server is present by using Ansible Ensuring that an IdM server is absent from an IdM topology by using Ansible Ensuring the absence of an IdM server despite hosting a last IdM server role Ensuring that an IdM server is absent but not necessarily disconnected from other IdM servers Ensuring that an existing IdM server is hidden using an Ansible playbook Ensuring that an existing IdM server is visible using an Ansible playbook Ensuring that an existing IdM server has an IdM DNS location assigned Ensuring that an existing IdM server has no IdM DNS location assigned 101.1. Checking that an IdM server is present by using Ansible You can use the ipaserver ansible-freeipa module in an Ansible playbook to verify that an Identity Management (IdM) server exists. Note The ipaserver Ansible module does not install the IdM server. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to the FQDN of the server. The FQDN of the example server is server123.idm.example.com . Run the Ansible playbook and specify the playbook file and the inventory file: Additional resources Installing an Identity Management server using an Ansible playbook The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory 101.2. Ensuring that an IdM server is absent from an IdM topology by using Ansible Use an Ansible playbook to ensure an Identity Management (IdM) server does not exist in an IdM topology, even as a host. In contrast to the ansible-freeipa ipaserver role, the ipaserver module used in this playbook does not uninstall IdM services from the server. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . 
The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-absent.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-absent-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to the FQDN of the server. The FQDN of the example server is server123.idm.example.com . Ensure that the state variable is set to absent . Run the Ansible playbook and specify the playbook file and the inventory file: Make sure all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. Additional resources Uninstalling an IdM server The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory 101.3. Ensuring the absence of an IdM server despite hosting a last IdM server role You can use Ansible to ensure that an Identity Management (IdM) server is absent even if the last IdM service instance is running on the server. A certificate authority (CA), key recovery authority (KRA), or DNS server are all examples of IdM services. Warning If you remove the last server that serves as a CA, KRA, or DNS server, you disrupt IdM functionality seriously. You can manually check which services are running on which IdM servers with the ipa service-find command. The principal name of a CA server is dogtag/ server_name / REALM_NAME . In contrast to the ansible-freeipa ipaserver role, the ipaserver module used in this playbook does not uninstall IdM services from the server. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-absent-ignore-last-of-role.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-absent-ignore-last-of-role-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to the FQDN of the server. The FQDN of the example server is server123.idm.example.com . Ensure that the ignore_last_of_role variable is set to true . Set the state variable to absent . 
Run the Ansible playbook and specify the playbook file and the inventory file: Make sure all name server (NS) DNS records that point to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. Additional resources Uninstalling an IdM server The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory 101.4. Ensuring that an IdM server is absent but not necessarily disconnected from other IdM servers If you are removing an Identity Management (IdM) server from the topology, you can keep its replication agreements intact with an Ansible playbook. The playbook also ensures that the IdM server does not exist in IdM, even as a host. Important Ignoring a server's replication agreements when removing it is only recommended when the other servers are dysfunctional servers that you are planning to remove anyway. Removing a server that serves as a central point in the topology can split your topology into two disconnected clusters. You can remove a dysfunctional server from the topology with the ipa server-del command. Note If you remove the last server that serves as a certificate authority (CA), key recovery authority (KRA), or DNS server, you seriously disrupt the Identity Management (IdM) functionality. To prevent this problem, the playbook makes sure these services are running on another server in the domain before it uninstalls a server that serves as a CA, KRA, or DNS server. In contrast to the ansible-freeipa ipaserver role, the ipaserver module used in this playbook does not uninstall IdM services from the server. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-absent-ignore_topology_disconnect.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-absent-ignore_topology_disconnect-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to the FQDN of the server. The FQDN of the example server is server123.idm.example.com . Ensure that the ignore_topology_disconnect variable is set to true . Ensure that the state variable is set to absent . Run the Ansible playbook and specify the playbook file and the inventory file: Optional: Make sure all name server (NS) DNS records pointing to server123.idm.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS. Additional resources Uninstalling an IdM server The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory. 101.5. 
Ensuring that an existing IdM server is hidden using an Ansible playbook Use the ipaserver ansible-freeipa module in an Ansible playbook to ensure that an existing Identity Management (IdM) server is hidden. Note that this playbook does not install the IdM server. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-hidden.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-hidden-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to the FQDN of the server. The FQDN of the example server is server123.idm.example.com . Ensure that the hidden variable is set to True . Run the Ansible playbook and specify the playbook file and the inventory file: Additional resources Installing an Identity Management server using an Ansible playbook The hidden replica mode The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory 101.6. Ensuring that an existing IdM server is visible by using an Ansible playbook Use the ipaserver ansible-freeipa module in an Ansible playbook to ensure that an existing Identity Management (IdM) server is visible. Note that this playbook does not install the IdM server. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-not-hidden.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-not-hidden-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to the FQDN of the server. The FQDN of the example server is server123.idm.example.com . Ensure that the hidden variable is set to no . 
Run the Ansible playbook and specify the playbook file and the inventory file: Additional resources Installing an Identity Management server using an Ansible playbook The hidden replica mode The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory 101.7. Ensuring that an existing IdM server has an IdM DNS location assigned Use the ipaserver ansible-freeipa module in an Ansible playbook to ensure that an existing Identity Management (IdM) server is assigned a specific IdM DNS location. Note that the ipaserver Ansible module does not install the IdM server. Prerequisites You know the IdM admin password. The IdM DNS location exists. The example location is germany . You have root access to the server. The example server is server123.idm.example.com . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-location.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-location-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to server123.idm.example.com . Set the location variable to germany . This is the modified Ansible playbook file for the current example: Run the Ansible playbook and specify the playbook file and the inventory file: Connect to server123.idm.example.com as root using SSH : Restart the named-pkcs11 service on the server for the updates to take effect immediately: Additional resources Installing an Identity Management server using an Ansible playbook Using Ansible to ensure an IdM location is present The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory 101.8. Ensuring that an existing IdM server has no IdM DNS location assigned Use the ipaserver ansible-freeipa module in an Ansible playbook to ensure that an existing Identity Management (IdM) server has no IdM DNS location assigned to it. Do not assign a DNS location to servers that change geographical location frequently. Note that the playbook does not install the IdM server. Prerequisites You know the IdM admin password. You have root access to the server. The example server is server123.idm.example.com . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. 
The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The SSH connection from the control node to the IdM server defined in the inventory file is working correctly. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the server-no-location.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/server/ directory: Open the server-no-location-copy.yml file for editing. Adapt the file by setting the following variables in the ipaserver task section and save the file: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to server123.idm.example.com . Ensure that the location variable is set to "" . Run the Ansible playbook and specify the playbook file and the inventory file: Connect to server123.idm.example.com as root using SSH : Restart the named-pkcs11 service on the server for the updates to take effect immediately: Additional resources Installing an Identity Management server using an Ansible playbook Using Ansible to manage DNS locations in IdM The README-server.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/server directory
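Note All of the procedures in this chapter assume an inventory file in the ~/ MyPlaybooks / directory that lists the IdM server by its FQDN. The following is a minimal sketch of such an inventory only and is not taken from the ansible-freeipa documentation. The [ipaserver] group name matches the hosts: ipaserver line used by the sample playbooks, while the ansible_user setting is an assumption and depends on how your control node connects to the managed node.

# ~/MyPlaybooks/inventory — minimal sketch
[ipaserver]
server123.idm.example.com

[ipaserver:vars]
ansible_user=root

With an inventory like this in place, the playbooks are run exactly as shown in the procedures, for example: ansible-playbook --vault-password-file=password_file -v -i inventory server-present-copy.yml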
|
[
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-present.yml server-present-copy.yml",
"--- - name: Server present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server server123.idm.example.com is present ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-absent.yml server-absent-copy.yml",
"--- - name: Server absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server server123.idm.example.com is absent ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-absent-ignore-last-of-role.yml server-absent-ignore-last-of-role-copy.yml",
"--- - name: Server absent with last of role skip example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server \"server123.idm.example.com\" is absent with last of role skip ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com ignore_last_of_role: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-absent-ignore-last-of-role-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-absent-ignore_topology_disconnect.yml server-absent-ignore_topology_disconnect-copy.yml",
"--- - name: Server absent with ignoring topology disconnects example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server \"server123.idm.example.com\" with ignoring topology disconnects ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com ignore_topology_disconnect: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-absent-ignore_topology_disconnect-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-hidden.yml server-hidden-copy.yml",
"--- - name: Server hidden example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server server123.idm.example.com is hidden ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com hidden: True",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-hidden-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-not-hidden.yml server-not-hidden-copy.yml",
"--- - name: Server not hidden example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server server123.idm.example.com is not hidden ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com hidden: no",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-not-hidden-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-location.yml server-location-copy.yml",
"--- - name: Server enabled example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server server123.idm.example.com with location \"germany\" is present ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com location: germany",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-location-copy.yml",
"ssh [email protected]",
"systemctl restart named-pkcs11",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/server/server-no-location.yml server-no-location-copy.yml",
"--- - name: Server no location example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure server server123.idm.example.com is present with no location ipaserver: ipaadmin_password: \"{{ ipaadmin_password }}\" name: server123.idm.example.com location: \"\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory server-no-location-copy.yml",
"ssh [email protected]",
"systemctl restart named-pkcs11"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-idm-servers-by-using-ansible_configuring-and-managing-idm
|
Logging
|
Logging OpenShift Container Platform 4.18 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/logging/index
|
Chapter 16. Scanning pods for vulnerabilities
|
Chapter 16. Scanning pods for vulnerabilities Using the Red Hat Quay Container Security Operator, you can access vulnerability scan results from the OpenShift Container Platform web console for container images used in active pods on the cluster. The Red Hat Quay Container Security Operator: Watches containers associated with pods on all or specified namespaces Queries the container registry where the containers came from for vulnerability information, provided an image's registry is running image scanning (such as Quay.io or a Red Hat Quay registry with Clair scanning) Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API Using the instructions here, the Red Hat Quay Container Security Operator is installed in the openshift-operators namespace, so it is available to all namespaces on your OpenShift Container Platform cluster. 16.1. Installing the Red Hat Quay Container Security Operator You can install the Red Hat Quay Container Security Operator from the OpenShift Container Platform web console Operator Hub, or by using the CLI. Prerequisites You have installed the oc CLI. You have administrator privileges to the OpenShift Container Platform cluster. You have containers that come from a Red Hat Quay or Quay.io registry running on your cluster. Procedure You can install the Red Hat Quay Container Security Operator by using the OpenShift Container Platform web console: On the web console, navigate to Operators OperatorHub and select Security . Select the Red Hat Quay Container Security Operator Operator, and then select Install . On the Red Hat Quay Container Security Operator page, select Install . Update channel , Installation mode , and Update approval are selected automatically. The Installed Namespace field defaults to openshift-operators . You can adjust these settings as needed. Select Install . The Red Hat Quay Container Security Operator appears after a few moments on the Installed Operators page. Optional: You can add custom certificates to the Red Hat Quay Container Security Operator. For example, create a certificate named quay.crt in the current directory. Then, run the following command to add the custom certificate to the Red Hat Quay Container Security Operator: USD oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators Optional: If you added a custom certificate, restart the Red Hat Quay Container Security Operator pod for the new certificates to take effect. Alternatively, you can install the Red Hat Quay Container Security Operator by using the CLI: Retrieve the latest version of the Container Security Operator and its channel by entering the following command: USD oc get packagemanifests container-security-operator \ -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{"\n"}{end}' \ | awk '{print "STARTING_CSV=" USD1 " CHANNEL=" USD2 }' \ | sort -Vr \ | head -1 Example output STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8 Using the output from the command, create a Subscription custom resource for the Red Hat Quay Container Security Operator and save it as container-security-operator.yaml . 
For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2 1 Specify the value you obtained in the step for the spec.channel parameter. 2 Specify the value you obtained in the step for the spec.startingCSV parameter. Enter the following command to apply the configuration: USD oc apply -f container-security-operator.yaml Example output subscription.operators.coreos.com/container-security-operator created 16.2. Using the Red Hat Quay Container Security Operator The following procedure shows you how to use the Red Hat Quay Container Security Operator. Prerequisites You have installed the Red Hat Quay Container Security Operator. Procedure On the OpenShift Container Platform web console, navigate to Home Overview . Under the Status section, Image Vulnerabilities provides the number of vulnerabilities found. Click Image Vulnerabilities to reveal the Image Vulnerabilities breakdown tab, which details the severity of the vulnerabilities, whether the vulnerabilities can be fixed, and the total number of vulnerabilities. You can address detected vulnerabilities in one of two ways: Select a link under the Vulnerabilities section. This takes you to the container registry that the container came from, where you can see information about the vulnerability. Select the namespace link. This takes you to the Image Manifest Vulnerabilities page, where you can see the name of the selected image and all of the namespaces where that image is running. After you have learned what images are vulnerable, how to fix those vulnerabilities, and the namespaces that the images are being run in, you can improve security by performing the following actions: Alert anyone in your organization who is running the image and request that they correct the vulnerability. Stop the images from running by deleting the deployment or other object that started the pod that the image is in. Note If you delete the pod, it might take several minutes for the vulnerability information to reset on the dashboard. 16.3. Querying image vulnerabilities from the CLI Using the oc command, you can display information about vulnerabilities detected by the Red Hat Quay Container Security Operator. Prerequisites You have installed the Red Hat Quay Container Security Operator on your OpenShift Container Platform instance. Procedure Enter the following command to query for detected container image vulnerabilities: USD oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s To display details for a particular vulnerability, append the vulnerability name and its namespace to the oc describe command. The following example shows an active container whose image includes an RPM package with a vulnerability: USD oc describe vuln --namespace mynamespace sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries... 16.4. 
Uninstalling the Red Hat Quay Container Security Operator To uninstall the Container Security Operator, you must uninstall the Operator and delete the imagemanifestvulns.secscan.quay.redhat.com custom resource definition (CRD). Procedure On the OpenShift Container Platform web console, click Operators Installed Operators . Click the menu of the Container Security Operator. Click Uninstall Operator . Confirm your decision by clicking Uninstall in the popup window. Use the CLI to delete the imagemanifestvulns.secscan.quay.redhat.com CRD. Remove the imagemanifestvulns.secscan.quay.redhat.com custom resource definition by entering the following command: USD oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com Example output customresourcedefinition.apiextensions.k8s.io "imagemanifestvulns.secscan.quay.redhat.com" deleted
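If you prefer to work with the underlying resources directly, the same vulnerability data is exposed through the ImageManifestVuln objects that the oc get vuln command queries. The following command is a minimal sketch; the jsonpath expression is illustrative only, and you can adjust the printed fields to suit your reporting needs:

oc get imagemanifestvulns.secscan.quay.redhat.com --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'

This prints one line per vulnerable image manifest, with the namespace and the manifest name separated by a tab.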
|
[
"oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators",
"oc get packagemanifests container-security-operator -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{\"\\n\"}{end}' | awk '{print \"STARTING_CSV=\" USD1 \" CHANNEL=\" USD2 }' | sort -Vr | head -1",
"STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2",
"oc apply -f container-security-operator.yaml",
"subscription.operators.coreos.com/container-security-operator created",
"oc get vuln --all-namespaces",
"NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s",
"oc describe vuln --namespace mynamespace sha256.ac50e3752",
"Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries",
"oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com",
"customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/pod-vulnerability-scan
|
Chapter 6. Tuned [tuned.openshift.io/v1]
|
Chapter 6. Tuned [tuned.openshift.io/v1] Description Tuned is a collection of rules that allows cluster-wide deployment of node-level sysctls and more flexibility to add custom tuning specified by user needs. These rules are translated and passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The responsibility for applying the node-level tuning then lies with the containerized Tuned daemons. More info: https://github.com/openshift/cluster-node-tuning-operator Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status status object TunedStatus is the status for a Tuned resource. 6.1.1. .spec Description spec is the specification of the desired behavior of Tuned. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status Type object Property Type Description managementState string managementState indicates whether the registry instance represented by this config instance is under operator management or not. Valid values are Force, Managed, Unmanaged, and Removed. profile array Tuned profiles. profile[] object A Tuned profile. recommend array Selection logic for all Tuned profiles. recommend[] object Selection logic for a single Tuned profile. 6.1.2. .spec.profile Description Tuned profiles. Type array 6.1.3. .spec.profile[] Description A Tuned profile. Type object Required data name Property Type Description data string Specification of the Tuned profile to be consumed by the Tuned daemon. name string Name of the Tuned profile to be used in the recommend section. 6.1.4. .spec.recommend Description Selection logic for all Tuned profiles. Type array 6.1.5. .spec.recommend[] Description Selection logic for a single Tuned profile. Type object Required priority profile Property Type Description machineConfigLabels object (string) MachineConfigLabels specifies the labels for a MachineConfig. The MachineConfig is created automatically to apply additional host settings (e.g. kernel boot parameters) profile 'Profile' needs and can only be applied by creating a MachineConfig. This involves finding all MachineConfigPools with machineConfigSelector matching the MachineConfigLabels and setting the profile 'Profile' on all nodes that match the MachineConfigPools' nodeSelectors. match array Rules governing application of a Tuned profile connected by logical OR operator. match[] object Rules governing application of a Tuned profile. operand object Optional operand configuration. priority integer Tuned profile priority. Highest priority is 0. profile string Name of the Tuned profile to recommend. 6.1.6. 
.spec.recommend[].match Description Rules governing application of a Tuned profile connected by logical OR operator. Type array 6.1.7. .spec.recommend[].match[] Description Rules governing application of a Tuned profile. Type object Required label Property Type Description label string Node or Pod label name. match array (undefined) Additional rules governing application of the tuned profile connected by logical AND operator. type string Match type: [node/pod]. If omitted, "node" is assumed. value string Node or Pod label value. If omitted, the presence of label name is enough to match. 6.1.8. .spec.recommend[].operand Description Optional operand configuration. Type object Property Type Description debug boolean turn debugging on/off for the TuneD daemon: true/false (default is false) tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf verbosity integer klog logging verbosity 6.1.9. .spec.recommend[].operand.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 6.1.10. .status Description TunedStatus is the status for a Tuned resource. Type object 6.2. API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/tuneds GET : list objects of kind Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds DELETE : delete collection of Tuned GET : list objects of kind Tuned POST : create a Tuned /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} DELETE : delete a Tuned GET : read the specified Tuned PATCH : partially update the specified Tuned PUT : replace the specified Tuned 6.2.1. /apis/tuned.openshift.io/v1/tuneds HTTP method GET Description list objects of kind Tuned Table 6.1. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty 6.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds HTTP method DELETE Description delete collection of Tuned Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Tuned Table 6.3. HTTP responses HTTP code Reponse body 200 - OK TunedList schema 401 - Unauthorized Empty HTTP method POST Description create a Tuned Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body Tuned schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 202 - Accepted Tuned schema 401 - Unauthorized Empty 6.2.3. /apis/tuned.openshift.io/v1/namespaces/{namespace}/tuneds/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the Tuned HTTP method DELETE Description delete a Tuned Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Tuned Table 6.10. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Tuned Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Tuned Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body Tuned schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK Tuned schema 201 - Created Tuned schema 401 - Unauthorized Empty
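As an illustration of how the spec fields described above fit together, the following is a minimal sketch of a Tuned object that defines one profile and recommends it for nodes carrying a particular label. The resource name, sysctl value, node label, and priority are assumptions chosen for this example, not values shipped with the cluster:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: example-sysctl                              # hypothetical resource name
  namespace: openshift-cluster-node-tuning-operator # namespace watched by the Node Tuning Operator
spec:
  profile:
  - name: example-sysctl                            # referenced from the recommend section below
    data: |
      [main]
      summary=Example profile that raises net.core.somaxconn
      include=openshift-node
      [sysctl]
      net.core.somaxconn=4096
  recommend:
  - match:
    - label: example.com/tuned                      # hypothetical node label; presence alone matches
    priority: 20
    profile: example-sysctl

Because no value field is given in the match rule, the presence of the label is enough for the profile to be applied, as described in .spec.recommend[].match[].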
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/node_apis/tuned-tuned-openshift-io-v1
|
39.2. Examples for Using ipa migrate-ds
|
39.2. Examples for Using ipa migrate-ds The data migration is performed using the ipa migrate-ds command. At its simplest, the command takes the LDAP URL of the directory to migrate and exports the data based on common default settings. Migrated entries The migrate-ds command only migrates accounts containing a gidNumber attribute, which is required by the posixAccount object class, and an sn attribute, which is required by the person object class. Customizing the process The ipa migrate-ds command enables you to customize how data is identified and exported. This is useful if the original directory tree has a unique structure or if some entries or attributes within entries should be excluded. For further details, pass the --help option to the command. Bind DN By default, the DN " cn=Directory Manager " is used to bind to the remote LDAP directory. Pass the --bind-dn option to the command to specify a custom bind DN. For further information, see Section 39.1.3.5, "Migration Tools" . Naming context changes If the Directory Server naming context differs from the one used in Identity Management, the base DNs for objects are transformed. For example: uid= user ,ou=people,dc=ldap,dc=example,dc=com is migrated to uid= user ,ou=people,dc=idm,dc=example,dc=com . Pass the --base-dn option to the ipa migrate-ds command to set the base DN used on the remote LDAP server for the migration. 39.2.1. Migrating Specific Subtrees The default directory structure places person entries in the ou=People subtree and group entries in the ou=Groups subtree. These subtrees are container entries for those different types of directory data. If no options are passed with the migrate-ds command, then the utility assumes that the given LDAP directory uses the ou=People and ou=Groups structure. Many deployments may have an entirely different directory structure (or may only want to export certain parts of the directory tree). There are two options which allow administrators to specify the RDN of a different user or group subtree on the source LDAP server: --user-container --group-container Note In both cases, the subtree must be the RDN only and must be relative to the base DN. For example, the ou=Employees,dc=example,dc=com directory tree can be migrated using --user-container=ou=Employees . For example: Pass the --scope option to the ipa migrate-ds command to set a scope: onelevel : Default. Only entries in the specified container are migrated. subtree : Entries in the specified container and all subcontainers are migrated. base : Only the specified object itself is migrated. 39.2.2. Specifically Including or Excluding Entries By default, the ipa migrate-ds script imports every user entry with the person object class and every group entry with the groupOfUniqueNames or groupOfNames object class. In some migration paths, only specific types of users and groups may need to be exported, or, conversely, specific users and groups may need to be excluded. One option is to set positively which types of users and groups to include. This is done by setting which object classes to search for when looking for user or group entries. This is a particularly useful option when there are custom object classes used in an environment for different user types. For example, this migrates only users with the custom fullTimeEmployee object class: Because of the different types of groups, this is also very useful for migrating only certain types of groups (such as user groups) while excluding other types of groups, like certificate groups.
For example: Positively specifying user and groups to migrate based on object class implicitly excludes all other users and groups from migration. Alternatively, it can be useful to migrate all user and group entries except for just a small handful of entries. Specific user or group accounts can be excluded while all others of that type are migrated. For example, this excludes a hobbies group and two users: Exclude statements are applied to users matching the pattern in the uid and to groups matching it in the cn attribute. Specifying an object class to migrate can be used together with excluding specific entries. For example, this specifically includes users with the fullTimeEmployee object class, yet excludes three managers: 39.2.3. Excluding Entry Attributes By default, every attribute and object class for a user or group entry is migrated. There are some cases where that may not be realistic, either because of bandwidth and network constraints or because the attribute data are no longer relevant. For example, if users are going to be assigned new user certificates as they join the IdM domain, then there is no reason to migrate the userCertificate attribute. Specific object classes and attributes can be ignored by the migrate-ds by using any of several different options: --user-ignore-objectclass --user-ignore-attribute --group-ignore-objectclass --group-ignore-attribute For example, to exclude the userCertificate attribute and strongAuthenticationUser object class for users and the groupOfCertificates object class for groups: Note Make sure not to ignore any required attributes. Also, when excluding object classes, make sure to exclude any attributes which are only supported by that object class. 39.2.4. Setting the Schema to Use Identity Management uses the RFC2307bis schema to define user, host, host group, and other network identities. However, if the LDAP server used as source for a migration uses the RFC2307 schema instead, pass the --schema option to the ipa migrate-ds command:
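The customization options described above can also be combined in a single run. The following hedged example migrates users from a nested employee subtree, skips the userCertificate attribute during the transfer, and reads the source server with the RFC2307 schema; the container name and server URL are placeholders for your environment:

ipa migrate-ds --user-container=ou=employees --scope=subtree --user-ignore-attribute=userCertificate --schema=RFC2307 ldap://ldap.example.com:389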
|
[
"ipa migrate-ds ldap://ldap.example.com:389",
"ipa migrate-ds --user-container=ou=employees --group-container=\"ou=employee groups\" ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee ldap://ldap.example.com:389",
"ipa migrate-ds --group-objectclass=groupOfNames --group-objectclass=groupOfUniqueNames ldap://ldap.example.com:389",
"ipa migrate-ds --exclude-groups=\"Golfers Group\" --exclude-users=jsmith --exclude-users=bjensen ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee --exclude-users=jsmith --exclude-users=bjensen --exclude-users=mreynolds ldap://ldap.example.com:389",
"ipa migrate-ds --user-ignore-attribute=userCertificate --user-ignore-objectclass=strongAuthenticationUser --group-ignore-objectclass=groupOfCertificates ldap://ldap.example.com:389",
"ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/using-migrate-ds
|
Chapter 16. External DNS Operator
|
Chapter 16. External DNS Operator 16.1. External DNS Operator in OpenShift Container Platform The External DNS Operator deploys and manages ExternalDNS to provide name resolution for services and routes from the external DNS provider to OpenShift Container Platform. 16.1.1. External DNS Operator The External DNS Operator implements the External DNS API from the olm.openshift.io API group. The External DNS Operator deploys ExternalDNS using a deployment resource. The ExternalDNS deployment watches resources such as services and routes in the cluster and updates the external DNS providers. Procedure You can deploy the ExternalDNS Operator on demand from the OperatorHub, which creates a Subscription object. Check the name of an install plan: USD oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' Example output install-zcvlr Check the status of the install plan; the status must be Complete : USD oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase' Example output Complete Use the oc get command to view the Deployment status: USD oc get -n external-dns-operator deployment/external-dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h 16.1.2. External DNS Operator logs You can view External DNS Operator logs by using the oc logs command. Procedure View the logs of the External DNS Operator: USD oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator 16.1.2.1. External DNS Operator domain name limitations The External DNS Operator uses the TXT registry, which follows the new format and adds the prefix for the TXT records. This reduces the maximum length of the domain name for the TXT records. A DNS record cannot be present without a corresponding TXT record, so the domain name of the DNS record must follow the same limit as the TXT records. For example, if the DNS record is <domain-name-from-source> , the TXT record is external-dns-<record-type>-<domain-name-from-source> . The domain name of the DNS records generated by External DNS Operator has the following limitations: Record type Number of characters CNAME 44 Wildcard CNAME records on AzureDNS 42 A 48 Wildcard A records on AzureDNS 46 If the domain name generated by External DNS exceeds the domain name limitation, the External DNS instance gives the following error: USD oc -n external-dns-operator logs external-dns-aws-7ddbd9c7f8-2jqjh 1 1 The external-dns-aws-7ddbd9c7f8-2jqjh parameter specifies the name of the External DNS pod. Example output time="2022-09-02T08:53:57Z" level=info msg="Desired change: CREATE external-dns-cname-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io TXT [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]" time="2022-09-02T08:53:57Z" level=info msg="Desired change: CREATE external-dns-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io TXT [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]" time="2022-09-02T08:53:57Z" level=info msg="Desired change: CREATE hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io A [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]" time="2022-09-02T08:53:57Z" level=error msg="Failure in zone test.example.io. 
[Id: /hostedzone/Z06988883Q0H0RL6UMXXX]" time="2022-09-02T08:53:57Z" level=error msg="InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\n\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6" 16.2. Installing External DNS Operator on cloud providers You can install External DNS Operator on cloud providers such as AWS, Azure, and GCP. 16.2.1. Installing the External DNS Operator You can install the External DNS Operator using the OpenShift Container Platform OperatorHub. Procedure Click Operators OperatorHub in the OpenShift Container Platform web console. Click External DNS Operator . You can use the Filter by keyword text box or the filter list to search for External DNS Operator from the list of Operators. Select the external-dns-operator namespace. On the External DNS Operator page, click Install . On the Install Operator page, ensure that you selected the following options: Update the channel as stable-v1.0 . Installation mode as A specific namespace on the cluster . Installed namespace as external-dns-operator . If namespace external-dns-operator does not exist, it gets created during the Operator installation. Select Approval Strategy as Automatic or Manual . Approval Strategy is set to Automatic by default. Click Install . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Verification Verify that External DNS Operator shows the Status as Succeeded on the Installed Operators dashboard. 16.3. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters: 16.3.1. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters: Parameter Description spec Enables the type of a cloud provider. spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2 1 Defines available options such as AWS, GCP, and Azure. 2 Defines the name of the secret which contains credentials for your cloud provider. zones Enables you to specify DNS zones by their domains. If you do not specify zones, ExternalDNS discovers all the zones present in your cloud provider account. zones: - "myzoneid" 1 1 Specifies the IDs of DNS zones. domains Enables you to specify AWS zones by their domains. If you do not specify domains, ExternalDNS discovers all the zones present in your cloud provider account. domains: - filterType: Include 1 matchType: Exact 2 name: "myzonedomain1.com" 3 - filterType: Include matchType: Pattern 4 pattern: ".*\\.otherzonedomain\\.com" 5 1 Instructs ExternalDNS to include the domain specified. 2 Instructs ExternalDNS that the domain matching has to be exact as opposed to regular expression match. 3 Defines the exact domain name by which ExternalDNS filters. 4 Sets regex-domain-filter flag in ExternalDNS . You can limit possible domains by using a Regex filter. 5 Defines the regex pattern to be used by ExternalDNS to filter the domains of the target zones. source Enables you to specify the source for the DNS records, Service or Route .
source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: "yes" hostnameAnnotation: "Allow" 5 fqdnTemplate: - "{{.Name}}.myzonedomain.com" 6 1 Defines the settings for the source of DNS records. 2 The ExternalDNS uses Service type as source for creating dns records. 3 Sets service-type-filter flag in ExternalDNS . The serviceType contains the following fields: default : LoadBalancer expected : ClusterIP NodePort LoadBalancer ExternalName 4 Ensures that the controller considers only those resources which matches with label filter. 5 The default value for hostnameAnnotation is Ignore which instructs ExternalDNS to generate DNS records using the templates specified in the field fqdnTemplates . When the value is Allow the DNS records get generated based on the value specified in the external-dns.alpha.kubernetes.io/hostname annotation. 6 External DNS Operator uses a string to generate DNS names from sources that don't define a hostname, or to add a hostname suffix when paired with the fake source. source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: "yes" 1 ExternalDNS` uses type route as source for creating dns records. 2 If the source is OpenShiftRoute , then you can pass the Ingress Controller name. The ExternalDNS uses canonical name of Ingress Controller as the target for CNAME record. 16.4. Creating DNS records on AWS You can create DNS records on AWS and AWS GovCloud by using External DNS Operator. 16.4.1. Creating DNS records on an public hosted zone for AWS by using Red Hat External DNS Operator You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. Procedure Check the user. The user must have access to the kube-system namespace. If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Fetch the values from aws-creds secret present in kube-system namespace. USD export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) USD export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None Get the list of dns zones to find the one which corresponds to the previously found route's domain: USD aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support Example output HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5 Create ExternalDNS resource for route source: USD cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF 1 Defines the name of external DNS resource. 
2 By default all hosted zones are selected as potential targets. You can include a hosted zone that you need. 3 The matching of the target zone's domain has to be exact (as opposed to regular expression match). 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines the AWS Route53 DNS provider. 6 Defines options for the source of DNS records. 7 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. 8 If the source is OpenShiftRoute , then you can pass the OpenShift Ingress Controller name. External DNS Operator selects the canonical hostname of that router as the target while creating CNAME record. Check the records created for OCP routes using the following command: USD aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console 16.5. Creating DNS records on Azure You can create DNS records on Azure using External DNS Operator. 16.5.1. Creating DNS records on an public DNS zone for Azure by using Red Hat External DNS Operator You can create DNS records on a public DNS zone for Azure by using Red Hat External DNS Operator. Procedure Check the user. The user must have access to the kube-system namespace. If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Fetch the values from azure-credentials secret present in kube-system namespace. USD CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) USD CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) USD RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) USD SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) USD TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d) Login to azure with base64 decoded values: USD az login --service-principal -u "USD{CLIENT_ID}" -p "USD{CLIENT_SECRET}" --tenant "USD{TENANT_ID}" Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None Get the list of dns zones to find the one which corresponds to the previously found route's domain: USD az network dns zone list --resource-group "USD{RESOURCE_GROUP}" Create ExternalDNS resource for route source: apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - "/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6 EOF 1 Specifies the name of External DNS CR. 2 Define the zone ID. 3 Defines the Azure DNS provider. 4 You can define options for the source of DNS records. 5 If the source is OpenShiftRoute then you can pass the OpenShift Ingress Controller name. 
External DNS selects the canonical hostname of that router as the target while creating CNAME record. 6 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. Check the records created for OCP routes using the following command: USD az network dns record-set list -g "USD{RESOURCE_GROUP}" -z test.azure.example.com | grep console Note To create records on private hosted zones on private Azure dns, you need to specify the private zone under zones which populates the provider type to azure-private-dns in the ExternalDNS container args. 16.6. Creating DNS records on GCP You can create DNS records on GCP using External DNS Operator. 16.6.1. Creating DNS records on an public managed zone for GCP by using Red Hat External DNS Operator You can create DNS records on a public managed zone for GCP by using Red Hat External DNS Operator. Procedure Check the user. The user must have access to the kube-system namespace. If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Copy the value of service_account.json in gcp-credentials secret in a file encoded-gcloud.json by running the following command: USD oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data "service_account.json"}}{{USDv}}' | base64 -d - > decoded-gcloud.json Export Google credentials: USD export GOOGLE_CREDENTIALS=decoded-gcloud.json Activate your account by using the following command: USD gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json Set your project: USD gcloud config set project <project_id as per decoded-gcloud.json> Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None Get the list of managed zones to find the zone which corresponds to the previously found route's domain: USD gcloud dns managed-zones list | grep test.gcp.example.com qe-cvs4g-private-zone test.gcp.example.com Create ExternalDNS resource for route source: apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8 EOF 1 Specifies the name of External DNS CR. 2 By default all hosted zones are selected as potential targets. You can include a hosted zone that you need. 3 The matching of the target zone's domain has to be exact (as opposed to regular expression match). 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines Google Cloud DNS provider. 6 You can define options for the source of DNS records. 7 If the source is OpenShiftRoute then you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 8 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. 
Check the records created for OCP routes using the following command: USD gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console 16.7. Creating DNS records on Infoblox You can create DNS records on Infoblox using the Red Hat External DNS Operator. 16.7.1. Creating DNS records on a public DNS zone on Infoblox You can create DNS records on a public DNS zone on Infoblox by using the Red Hat External DNS Operator. Prerequisites You have access to the OpenShift CLI ( oc ). You have access to the Infoblox UI. Procedure Create a secret object with Infoblox credentials by running the following command: USD oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password> Get the routes objects to check your cluster domain by running the following command: USD oc get routes --all-namespaces | grep console Example Output openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None Create an ExternalDNS resource YAML file, for example, sample-infoblox.yaml, as follows: apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox spec: provider: type: Infoblox infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: "2.3.1" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute openshiftRouteOptions: routerName: default Create an ExternalDNS resource on Infoblox by running the following command: USD oc create -f sample-infoblox.yaml From the Infoblox UI, check the DNS records created for console routes: Click Data Management DNS Zones . Select the zone name. 16.8. Configuring the cluster-wide proxy on the External DNS Operator You can configure the cluster-wide proxy in the External DNS Operator. After configuring the cluster-wide proxy in the External DNS Operator, Operator Lifecycle Manager (OLM) automatically updates all the deployments of the Operators with the environment variables such as HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . 16.8.1. Configuring the External DNS Operator to trust the certificate authority of the cluster-wide proxy You can configure the External DNS Operator to trust the certificate authority of the cluster-wide proxy. 
Procedure Create the config map to contain the CA bundle in the external-dns-operator namespace by running the following command: USD oc -n external-dns-operator create configmap trusted-ca To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command: USD oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true Update the subscription of the External DNS Operator by running the following command: USD oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{"op": "add", "path": "/spec/config", "value":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}]' Verification After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: USD oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME Example output trusted-ca
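The route-based examples above can be adapted to publish services instead of routes. The following is a minimal sketch of an ExternalDNS resource that combines the Service source parameters from Section 16.3; the resource name, credentials secret, zone ID, and FQDN template are assumptions for illustration only:

apiVersion: externaldns.olm.openshift.io/v1beta1
kind: ExternalDNS
metadata:
  name: sample-aws-services        # hypothetical name
spec:
  provider:
    type: AWS
    aws:
      credentials:
        name: aws-access-key       # hypothetical secret containing AWS credentials
  zones:
  - "Z02355203TNN1XXXX1J6O"        # hypothetical hosted zone ID
  source:
    type: Service
    service:
      serviceType:
      - LoadBalancer
    hostnameAnnotation: "Ignore"   # generate names from fqdnTemplate rather than annotations
    fqdnTemplate:
    - "{{.Name}}.testextdnsoperator.apacshift.support"  # hypothetical zone domain

Apply the resource with oc apply -f <file_name>.yaml, and then verify the created records in your DNS provider as shown in the earlier sections.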
|
[
"oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'",
"install-zcvlr",
"oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'",
"Complete",
"oc get -n external-dns-operator deployment/external-dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h",
"oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator",
"oc -n external-dns-operator logs external-dns-aws-7ddbd9c7f8-2jqjh 1",
"time=\"2022-09-02T08:53:57Z\" level=info msg=\"Desired change: CREATE external-dns-cname-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io TXT [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=info msg=\"Desired change: CREATE external-dns-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io TXT [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=info msg=\"Desired change: CREATE hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc.test.example.io A [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=error msg=\"Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=error msg=\"InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\\n\\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6\"",
"spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2",
"zones: - \"myzoneid\" 1",
"domains: - filterType: Include 1 matchType: Exact 2 name: \"myzonedomain1.com\" 3 - filterType: Include matchType: Pattern 4 pattern: \".*\\\\.otherzonedomain\\\\.com\" 5",
"source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: \"yes\" hostnameAnnotation: \"Allow\" 5 fqdnTemplate: - \"{{.Name}}.myzonedomain.com\" 6",
"source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: \"yes\"",
"oc whoami",
"system:admin",
"export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None",
"aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF",
"aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console",
"oc whoami",
"system:admin",
"CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)",
"az login --service-principal -u \"USD{CLIENT_ID}\" -p \"USD{CLIENT_SECRET}\" --tenant \"USD{TENANT_ID}\"",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None",
"az network dns zone list --resource-group \"USD{RESOURCE_GROUP}\"",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - \"/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com\" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6 EOF",
"az network dns record-set list -g \"USD{RESOURCE_GROUP}\" -z test.azure.example.com | grep console",
"oc whoami",
"system:admin",
"oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data \"service_account.json\"}}{{USDv}}' | base64 -d - > decoded-gcloud.json",
"export GOOGLE_CREDENTIALS=decoded-gcloud.json",
"gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json",
"gcloud config set project <project_id as per decoded-gcloud.json>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None",
"gcloud dns managed-zones list | grep test.gcp.example.com qe-cvs4g-private-zone test.gcp.example.com",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8 EOF",
"gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console",
"oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox spec: provider: type: Infoblox infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: \"2.3.1\" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute openshiftRouteOptions: routerName: default",
"oc create -f sample-infoblox.yaml",
"oc -n external-dns-operator create configmap trusted-ca",
"oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/config\", \"value\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}]'",
"oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME",
"trusted-ca"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/external-dns-operator-1
|
Chapter 5. Load Balancing, Scheduling, and Migration
|
Chapter 5. Load Balancing, Scheduling, and Migration 5.1. Load Balancing, Scheduling, and Migration Individual hosts have finite hardware resources, and are susceptible to failure. To mitigate against failure and resource exhaustion, hosts are grouped into clusters, which are essentially a grouping of shared resources. A Red Hat Virtualization environment responds to changes in demand for host resources using load balancing policy, scheduling, and migration. The Manager is able to ensure that no single host in a cluster is responsible for all of the virtual machines in that cluster. Conversely, the Manager is able to recognize an underutilized host, and migrate all virtual machines off of it, allowing an administrator to shut down that host to save power. Available resources are checked as a result of three events: Virtual machine start - Resources are checked to determine on which host a virtual machine will start. Virtual machine migration - Resources are checked in order to determine an appropriate target host. Time elapses - Resources are checked at a regular interval to determine whether individual host load is in compliance with cluster load balancing policy. The Manager responds to changes in available resources by using the load balancing policy for a cluster to schedule the migration of virtual machines from one host in a cluster to another. The relationship between load balancing policy, scheduling, and virtual machine migration are discussed in the following sections.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/chap-load_balancing_scheduling_and_migration
|
Chapter 19. JMX Navigator
|
Chapter 19. JMX Navigator The JMX Navigator view, shown in Figure 19.1, "JMX Navigator view" , displays all processes that are running in your application, and it drives all interactions with the monitoring and testing features. Other areas of the Fuse Integration perspective adapt to display information related to the node selected in the JMX Navigator view. In the JMX Navigator view, the context menu provides the commands needed to activate route tracing and to add JMS destinations. Figure 19.1. JMX Navigator view By default, the JMX Navigator view discovers all JMX servers running on the local machine and lists them under the following categories: Local Processes Server Connections User-Defined Connections Note You can add other JMX servers by using a server's JMX URL. For details, see Section 19.2, "Adding a JMX server" . 19.1. Viewing Processes in JMX Overview The JMX Navigator view lists all known processes in a series of trees. The root for each tree is a JMX server. The first tree in the list is a special Local Processes tree that contains all JMX servers that are running on the local machine. You must connect to one of the JMX servers to see the processes it contains. Viewing processes in a local JMX server To view information about processes in a local JMX server: In the JMX Navigator view, expand Local Processes . Under Local Processes , double-click one of the top-level entries to connect to it. Click the icon that appears next to the entry to display a list of its components that are running in the JVM. Viewing processes in alternate JMX servers To view information about processes in an alternate JMX server: Add the JMX server to the JMX Navigator view, as described in Section 19.2, "Adding a JMX server" . In the JMX Navigator view, expand the server's entry by using the icon that appears next to the entry. This displays a list of that JMX server's components that are running in the JVM. 19.2. Adding a JMX server Overview In the JMX Navigator view, under the Local Processes branch of the tree, you can see a list of all local JMX servers. You may need to connect to specific JMX servers to see components deployed on other machines. To add a JMX server, you must know the JMX URL of the server you want to add. Procedure To add a JMX server to the JMX Navigator view: In the JMX Navigator view, click New Connection . In the Create a new JMX connection wizard, select Default JMX Connection . Click . Select the Advanced tab. In the Name field, enter a name for the JMX server. The name can be any string. It is used to label the entry in the JMX Navigator tree. In the JMX URL field, enter the JMX URL of the server. If the JMX server requires authentication, enter your user name and password in the Username and Password fields. Click Finish . The new JMX server appears as a branch in the User-Defined Connections tree.
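If you are unsure what to enter in the JMX URL field, a standard JMX service URL for a JVM that exposes the default RMI connector has the following form; the host name and port are assumptions that depend on how the target JVM was started:

service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi

For a remote server, replace localhost and 1099 with the host name and port configured on the remote JVM.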
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/FIDEJMXExplore
|
Chapter 24. System roles
|
Chapter 24. System roles The following chapter contains the most notable changes to system roles between RHEL 8 and RHEL 9. 24.1. Performing system administration tasks with RHEL system roles As of the Red Hat Enterprise Linux 9.0 General Availability (GA) release, RHEL system roles includes the ansible-core 2.12 package. This is a version of Ansible that has only the core functionality - that is, it does not include modules such as blivet for the storage role, gobject for the network role, or plugins such as json_query . With RHEL system roles, you can take advantage of a configuration interface to remotely manage multiple RHEL systems. As an alternative to the traditional RHEL system roles format, you can use Ansible Collections , available in the Automation Hub for Ansible Automation Platform customers only, or as an RPM package for RHEL users. RHEL system roles support Support for the following roles is available: The cockpit RHEL system role. You can automate the deployment and configuration of the web console and, thus, be able to manage your RHEL systems from a web browser. The firewall RHEL system role. The ha_cluster RHEL system role, formerly presented as a Technology Preview, is now fully supported. The gfs2 RHEL system role, which creates Red Hat Global File System 2 (GFS2) file systems in a Pacemaker cluster managed with the pcs command-line interface. Previously, setting up GFS2 file systems in a supported configuration required you to follow a long series of steps to configure the storage and cluster resources. The gfs2 role simplifies the process. Using the role, you can specify only the minimum information needed to configure GFS2 file systems in a RHEL high availability cluster. The nbde_client RHEL system role now supports servers with static IP addresses. The Microsoft SQL ( microsoft.sql.server ) role for Microsoft SQL Server. It simplifies and automates the configuration of RHEL with recommended settings for MSSQL Server workloads. Currently, SQL Server does not support running on RHEL 9. You can only run the role on a RHEL 9 control node to manage SQL Server on RHEL 7 and RHEL 8. The VPN RHEL system role, to configure VPN connections on RHEL systems by using Red Hat Ansible Automation Platform. Users can use it to set up host-to-host, network-to-network, VPN Remote Access Server, and Mesh configurations. The IPMI modules, to automate hardware management interfaces available in the rhel_mgmt Collection. The keylime_server RHEL system role, to configure and deploy the server components for Keylime Remote Attestation. To learn more about the RHEL system roles, see the documentation title Administration and configuration tasks using system roles in RHEL . Support for Ansible Engine 2.9 is no longer available in RHEL 9 Ansible Engine 2.9 is no longer available in Red Hat Enterprise Linux 9. Playbooks that previously ran on Ansible Engine 2.9 might generate error messages related to missing plugins or modules. If your use case for Ansible falls outside of the limited scope of support for Ansible Core provided in RHEL, contact Red Hat to discuss the available offerings. RHEL system roles now uses Ansible Core As of the RHEL 9 General Availability release, Ansible Core is provided with a limited scope of support to enable RHEL supported automation use cases. Ansible Core is available in the AppStream repository for RHEL. For details on the scope of support, refer to Scope of support for the Ansible Core package included in the RHEL 9 AppStream .
Note As of Red Hat Enterprise Linux 9.0, the scope of support for Ansible Core in the RHEL AppStream is limited to any Ansible playbooks, roles, and modules that are included with or generated by a Red Hat product, such as RHEL system roles. The deprecated --token option of the subscription-manager register command will stop working at the end of November 2024 The default entitlement server, subscription.rhsm.redhat.com , will no longer allow token-based authentication from the end of November 2024. As a result, the deprecated --token=<TOKEN> option of the subscription-manager register command will no longer be a supported authentication method. As a consequence, if you use subscription-manager register --token=<TOKEN> , the registration will fail with the following error message: To register your system, use other supported authorization methods, such as the paired options --username / --password or --org / --activationkey with the subscription-manager register command. RHEL system roles can be used to manage multiple different versions of RHEL You can use RHEL system roles as a consistent interface to manage different versions of RHEL. This can help to ease the transition between major versions of RHEL. RHEL 8 moves to Maintenance Support phase After the RHEL 8.10 release, RHEL 8 moved to the Maintenance Support phase and will no longer receive new features. As a result, starting with RHEL 9.5, new features will only be available in RHEL 9. Therefore, to get access to the latest enhancements, use RHEL 9 for your RHEL system role control nodes.
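For reference, the registration methods that remain supported after token-based authentication is withdrawn look like the following; the values in angle brackets are placeholders for your own credentials:
# Register with a Customer Portal user name and password:
subscription-manager register --username=<username> --password=<password>
# Or register with an organization ID and an activation key:
subscription-manager register --org=<organization_id> --activationkey=<activation_key>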
|
[
"Token authentication not supported by the entitlement server"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_system-roles_considerations-in-adopting-rhel-9
|
Power Monitoring
|
Power Monitoring OpenShift Container Platform 4.17 Configuring and using power monitoring for OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/power_monitoring/index
|
8.94. libqb
|
8.94. libqb 8.94.1. RHBA-2013:1634 - libqb bug fix and enhancement update Updated libqb packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libqb packages provide a library with the primary purpose of providing high performance client server reusable features, such as high performance logging, tracing, inter-process communication, and polling. Note The libqb packages have been upgraded to upstream version 0.16.0, which provides a number of bug fixes and enhancements over the previous version, including a patch to fix a bug in the qb_log_from_external_source() function that caused Pacemaker's policy engine to terminate unexpectedly. (BZ# 950403 ) Bug Fix BZ# 889299 Output of the libqb Blackbox feature did not contain logging information if the string's length or precision was specified. This affected usability of the Blackbox output for debugging purposes, specifically when used with the Pacemaker cluster resource manager. The problem was caused by bugs in libqb's implementation of the strlcpy() and strlcat() functions and in the code responsible for the Blackbox log formatting. This update corrects these bugs, so the Blackbox output is now formatted as expected. Users of libqb are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libqb
|
Chapter 7. Uninstalling a cluster on Alibaba Cloud
|
Chapter 7. Uninstalling a cluster on Alibaba Cloud You can remove a cluster that you deployed to Alibaba Cloud. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
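The optional cleanup step can be scripted as follows; this is only a sketch, and <installation_directory> is a placeholder for the path you used above:
# Remove the installation assets and the installer binary after the cluster has been destroyed:
rm -rf <installation_directory>
rm ./openshift-install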
|
[
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_alibaba/uninstalling-cluster-alibaba
|
Chapter 15. Auditing and Events
|
Chapter 15. Auditing and Events Red Hat Single Sign-On provides a rich set of auditing capabilities. Every single login action can be recorded and stored in the database and reviewed in the Admin Console. All admin actions can also be recorded and reviewed. There is also a Listener SPI with which plugins can listen for these events and perform some action. Built-in listeners include a simple log file and the ability to send an email if an event occurs. 15.1. Login Events Login events occur for things like when a user logs in successfully, when somebody enters in a bad password, or when a user account is updated. Every single event that happens to a user can be recorded and viewed. By default, no events are stored or viewed in the Admin Console. Only error events are logged to the console and the server's log file. To start persisting you'll need to enable storage. Go to the Events left menu item and select the Config tab. Event Configuration To start storing events you'll need to turn the Save Events switch to on under the Login Events Settings . Save Events The Saved Types field allows you to specify which event types you want to store in the event store. The Clear events button allows you to delete all the events in the database. The Expiration field allows you to specify how long you want to keep events stored. Once you've enabled storage of login events and decided on your settings, don't forget to click the Save button on the bottom of this page. To view events, go to the Login Events tab. Login Events As you can see, there's a lot of information stored and, if you are storing every event, there are a lot of events stored for each login action. The Filter button on this page allows you to filter which events you are actually interested in. Login Event Filter In this screenshot, we're filtering only Login events. Clicking the Update button runs the filter. 15.1.1. Event Types Login events: Login - A user has logged in. Register - A user has registered. Logout - A user has logged out. Code to Token - An application/client has exchanged a code for a token. Refresh Token - An application/client has refreshed a token. Account events: Social Link - An account has been linked to a social provider. Remove Social Link - A social provider has been removed from an account. Update Email - The email address for an account has changed. Update Profile - The profile for an account has changed. Send Password Reset - A password reset email has been sent. Update Password - The password for an account has changed. Update TOTP - The TOTP settings for an account have changed. Remove TOTP - TOTP has been removed from an account. Send Verify Email - An email verification email has been sent. Verify Email - The email address for an account has been verified. For all events there is a corresponding error event. 15.1.2. Event Listener Event listeners listen for events and perform an action based on that event. There are two built-in listeners that come with Red Hat Single Sign-On: Logging Event Listener and Email Event Listener. The Logging Event Listener writes to a log file whenever an error event occurs and is enabled by default. Here's an example log message: This logging is very useful if you want to use a tool like Fail2Ban to detect if there is a hacker bot somewhere that is trying to guess user passwords. You can parse the log file for LOGIN_ERROR and pull out the IP Address. Then feed this information into Fail2Ban so that it can help prevent attacks. 
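As a sketch of the log-parsing step mentioned above, the following command extracts the source IP addresses of failed logins from the server log so that they can be fed to a tool such as Fail2Ban. The log file path is a placeholder that depends on your operating mode, and the command assumes GNU grep:
# Count failed logins per client IP address by pulling the ipAddress field out of LOGIN_ERROR lines:
grep -oP 'type=LOGIN_ERROR.*?ipAddress=\K[^,]+' /path/to/standalone/log/server.log | sort | uniq -c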
The Logging Event Listener logs events to the org.keycloak.events logger category. By default, debug log events are not included in server logs. To include debug log events in server logs, edit the standalone.xml file and change the log level used by the Logging Event listener. Alternatively, you can configure the log level for org.keycloak.events . For example, to change the log level, add the following: <subsystem xmlns="urn:jboss:domain:logging:..."> ... <logger category="org.keycloak.events"> <level name="DEBUG"/> </logger> </subsystem> To change the log level used by the Logging Event listener, add the following: <subsystem xmlns="urn:jboss:domain:keycloak-server:..."> ... <spi name="eventsListener"> <provider name="jboss-logging" enabled="true"> <properties> <property name="success-level" value="info"/> <property name="error-level" value="error"/> </properties> </provider> </spi> </subsystem> Valid values for the log levels are debug , info , warn , error , and fatal . The Email Event Listener sends an email to the user's account when an event occurs. Currently, the Email Event Listener supports the following events: Login Error Update Password Update TOTP Remove TOTP To enable the Email Listener, go to the Config tab and click on the Event Listeners field. This will show a drop-down list box where you can select email. You can exclude one or more events by editing the standalone.xml , standalone-ha.xml , or domain.xml that comes with your distribution and adding, for example: <spi name="eventsListener"> <provider name="email" enabled="true"> <properties> <property name="exclude-events" value="["UPDATE_TOTP","REMOVE_TOTP"]"/> </properties> </provider> </spi> You can also set up a maximum length of the Event detail stored in the database by editing standalone.xml , standalone-ha.xml , or domain.xml . This setting can be useful in case some field (e.g. redirect_uri) is very long. Here is an example of defining the maximum length: <spi name="eventsStore"> <provider name="jpa" enabled="true"> <properties> <property name="max-detail-length" value="1000"/> </properties> </provider> </spi> See the Server Installation and Configuration Guide for more details on where the standalone.xml , standalone-ha.xml , or domain.xml file lives. 15.2. Admin Events Any action an admin performs within the admin console can be recorded for auditing purposes. The Admin Console performs administrative functions by invoking the Red Hat Single Sign-On REST interface. Red Hat Single Sign-On audits these REST invocations. The resulting events can then be viewed in the Admin Console. To enable auditing of Admin actions, go to the Events left menu item and select the Config tab. Event Configuration In the Admin Events Settings section, turn on the Save Events switch. Admin Event Configuration The Include Representation switch will include any JSON document that is sent through the admin REST API. This allows you to view exactly what an admin has done, but can lead to a lot of information stored in the database. The Clear admin events button allows you to wipe out the current information stored. To view the admin events, go to the Admin Events tab. Admin Events If the Details column has a Representation box, you can click on that to view the JSON that was sent with that operation. Admin Representation You can also filter for the events you are interested in by clicking the Filter button. Admin Event Filter
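The event settings described in this chapter can also be changed from the command line instead of the Admin Console. The following is a hedged sketch that uses the Admin CLI ( kcadm.sh ) against the events/config resource of a realm; the realm name and the selected event types are examples only, and kcadm.sh must already be authenticated against the server:
# Enable storage of login events (restricted to two types) and of admin events for the demorealm realm:
./kcadm.sh update events/config -r demorealm -s eventsEnabled=true -s 'enabledEventTypes=["LOGIN","LOGIN_ERROR"]' -s adminEventsEnabled=true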
|
[
"11:36:09,965 WARN [org.keycloak.events] (default task-51) type=LOGIN_ERROR, realmId=master, clientId=myapp, userId=19aeb848-96fc-44f6-b0a3-59a17570d374, ipAddress=127.0.0.1, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8180/myapp, code_id=b669da14-cdbb-41d0-b055-0810a0334607, username=admin",
"<subsystem xmlns=\"urn:jboss:domain:logging:...\"> <logger category=\"org.keycloak.events\"> <level name=\"DEBUG\"/> </logger> </subsystem>",
"<subsystem xmlns=\"urn:jboss:domain:keycloak-server:...\"> <spi name=\"eventsListener\"> <provider name=\"jboss-logging\" enabled=\"true\"> <properties> <property name=\"success-level\" value=\"info\"/> <property name=\"error-level\" value=\"error\"/> </properties> </provider> </spi> </subsystem>",
"<spi name=\"eventsListener\"> <provider name=\"email\" enabled=\"true\"> <properties> <property name=\"exclude-events\" value=\"["UPDATE_TOTP","REMOVE_TOTP"]\"/> </properties> </provider> </spi>",
"<spi name=\"eventsStore\"> <provider name=\"jpa\" enabled=\"true\"> <properties> <property name=\"max-detail-length\" value=\"1000\"/> </properties> </provider> </spi>"
] |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/auditing_and_events
|
Chapter 2. Downloading Red Hat Enterprise Linux
|
Chapter 2. Downloading Red Hat Enterprise Linux If you have a Red Hat subscription, you can download ISO image files of the Red Hat Enterprise Linux 7 installation DVD from the Red Hat Customer Portal. If you do not have a subscription, either purchase one or obtain a free evaluation subscription from the Software & Download Center at https://access.redhat.com/downloads/ . There are two basic types of installation media available for the AMD64 and Intel 64 (x86_64), ARM (Aarch64), and IBM Power Systems (ppc64) architectures: Binary DVD A full installation image that boots the installation program and performs the entire installation without additional package repositories. Note Binary DVDs are also available for IBM Z. They can be used to boot the installation program using a SCSI DVD drive or as installation sources. Boot.iso A minimal boot image that boots the installation program but requires access to additional package repositories. Red Hat does not provide the repository; you must create it using the full installation ISO image. Note Supplementary DVD images containing additional packages, such as the IBM Java Runtime Environment and additional virtualization drivers, may be available, but they are beyond the scope of this document. If you have a subscription or evaluation subscription, follow these steps to obtain the Red Hat Enterprise Linux 7 ISO image files: Procedure 2.1. Downloading Red Hat Enterprise Linux ISO Images Visit the Customer Portal at https://access.redhat.com/home . If you are not logged in, click LOG IN on the right side of the page. Enter your account credentials when prompted. Click DOWNLOADS at the top of the page. Click Red Hat Enterprise Linux . Ensure that you select the appropriate Product Variant and Architecture for your installation target. By default, Red Hat Enterprise Linux Server and x86_64 are selected. If you are not sure which variant best suits your needs, see http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux . Additionally, a list of packages available for every variant is available in the Red Hat Enterprise Linux 7 Package Manifest . A list of available downloads is displayed; most notably, a minimal Boot ISO image and a full installation Binary DVD ISO image. These files are described above. Additional images can be available, such as preconfigured virtual machine images, which are beyond the scope of this document. Choose the image file that you want to use. You have two ways to download it from the Customer Portal: Click its name to begin downloading it to your computer using your web browser. Right-click the name and then click Copy Link Location or a similar menu item, the exact wording of which depends on the browser that you are using. This action copies the URL of the file to your clipboard, which allows you to use an alternative application to download the file to your computer. This approach is especially useful if your Internet connection is unstable: in that case, your browser might fail to download the whole file, and an attempt to resume the interrupted download process fails because the download link contains an authentication key which is only valid for a short time. Specialized applications such as curl can, however, be used to resume interrupted download attempts from the Customer Portal, which means that you need not download the whole file again, saving you time and bandwidth. Procedure 2.2.
Using curl to Download Installation Media Make sure the curl package is installed by running the following command as root: If your Linux distribution does not use yum , or if you do not use Linux at all, download the most appropriate software package from the curl web site . Open a terminal window, enter a suitable directory, and type the following command: Replace filename.iso with the ISO image name as displayed in the Customer Portal, such as rhel-server-7.0-x86_64-dvd.iso . This is important because the download link in the Customer Portal contains extra characters which curl would otherwise use in the downloaded file name, too. Then, keep the single quotation mark in front of the parameter, and replace copied_link_location with the link that you have copied from the Customer Portal; copy it again if you copied the commands above in the meantime. Note that in Linux, you can paste the content of the clipboard into the terminal window by middle-clicking anywhere in the window, or by pressing Shift + Insert . Finally, use another single quotation mark after the last parameter, and press Enter to run the command and start transferring the ISO image. The single quotation marks prevent the command line interpreter from misinterpreting any special characters that might be included in the download link. Example 2.1. Downloading an ISO image with curl The following is an example of a curl command line: Note that the actual download link is much longer because it contains complicated identifiers. If your Internet connection does drop before the transfer is complete, refresh the download page in the Customer Portal; log in again if necessary. Copy the new download link, use the same basic curl command line parameters as earlier but be sure to use the new download link, and add -C - to instruct curl to automatically determine where it should continue based on the size of the already downloaded file. Example 2.2. Resuming an interrupted download attempt The following is an example of a curl command line that you use if you have only partially downloaded the ISO image of your choice: Optionally, you can use a checksum utility such as sha256sum to verify the integrity of the image file after the download finishes. All downloads on the Download Red Hat Enterprise Linux page are provided with their checksums for reference: Similar tools are available for Microsoft Windows and Mac OS X . You can also use the installation program to verify the media when starting the installation; see Section 23.2.2, "Verifying Boot Media" for details. After you have downloaded an ISO image file from the Customer Portal, you can: Burn it to a CD or DVD as described in Section 3.1, "Making an Installation CD or DVD" . Use it to create a bootable USB drive; see Section 3.2, "Making Installation USB Media" . Place it on a server to prepare for a network installation. For specific directions, see Section 3.3.3, "Installation Source on a Network" . Place it on a hard drive to use the drive as an installation source. For specific instructions, see Section 3.3.2, "Installation Source on a Hard Drive" . Use it to prepare a Preboot Execution Environment (PXE) server, which allows you to boot the installation system over a network. See Chapter 24, Preparing for a Network Installation for instructions.
|
[
"yum install curl",
"curl -o filename.iso ' copied_link_location '",
"curl -o rhel-server-7.0-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-server-7.0-x86_64-dvd.iso?_auth_=141...7bf'",
"curl -o rhel-server-7.0-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-server-7.0-x86_64-dvd.iso?_auth_=141...963' -C -",
"sha256sum rhel-server-7.0-x86_64-dvd.iso 85a...46c rhel-server-7.0-x86_64-dvd.iso"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-download-red-hat-enterprise-linux
|
Chapter 4. External storage services
|
Chapter 4. External storage services Red Hat OpenShift Data Foundation can use IBM FlashSystems or make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. An external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/external-storage-services_rhodf
|
2.5. Additional Resources
|
2.5. Additional Resources For more information on how to use systemd and related tools to manage system resources on Red Hat Enterprise Linux, see the sources listed below: Installed Documentation Man Pages of Cgroup-Related Systemd Tools systemd-run (1) - The manual page lists all command-line options of the systemd-run utility. systemctl (1) - The manual page of the systemctl utility that lists available options and commands. systemd-cgls (1) - This manual page lists all command-line options of the systemd-cgls utility. systemd-cgtop (1) - The manual page contains the list of all command-line options of the systemd-cgtop utility. machinectl (1) - This manual page lists all command-line options of the machinectl utility. systemd.kill (5) - This manual page provides an overview of kill configuration options for system units. Controller-Specific Kernel Documentation The kernel-doc package provides detailed documentation of all resource controllers. This package is included in the Optional subscription channel. Before subscribing to the Optional channel, see the Scope of Coverage Details, then follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on Red Hat Customer Portal. To install kernel-doc from the Optional channel, type as root : After the installation, the following files will appear under the /usr/share/doc/kernel-doc- <kernel_version> /Documentation/cgroups/ directory: blkio subsystem - blkio-controller.txt cpuacct subsystem - cpuacct.txt cpuset subsystem - cpusets.txt devices subsystem - devices.txt freezer subsystem - freezer-subsystem.txt memory subsystem - memory.txt net_cls subsystem - net_cls.txt Additionally, see the following files for further information about the cpu subsystem: Real-Time scheduling - /usr/share/doc/kernel-doc- <kernel_version> /Documentation/scheduler/sched-rt-group.txt CFS scheduling - /usr/share/doc/kernel-doc- <kernel_version> /Documentation/scheduler/sched-bwc.txt Online Documentation Red Hat Enterprise Linux 7 System Administrators Guide - The System Administrator's Guide documents relevant information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 7. It is oriented towards system administrators with a basic understanding of the system. The D-Bus API of systemd - The reference for D-Bus API commands for accessing systemd .
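As a brief illustration of the tools and files listed above (a sketch; the kernel-doc path contains the installed kernel version):
# View the control group hierarchy and live per-group resource usage:
systemd-cgls
systemd-cgtop
# After installing kernel-doc, read a controller's documentation, for example the memory controller:
less /usr/share/doc/kernel-doc-*/Documentation/cgroups/memory.txt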
|
[
"~]# yum install kernel-doc"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-Using_Control_Groups-Additional_Resources
|
Chapter 13. Installing Fuse on Red Hat JBoss Enterprise Application Platform
|
Chapter 13. Installing Fuse on Red Hat JBoss Enterprise Application Platform Install Red Hat Fuse 7.12 on Red Hat JBoss EAP 7.4 to integrate it with Red Hat Decision Manager. Prerequisites A Red Hat Decision Manager installation on Red Hat JBoss Enterprise Application Platform 7.4 is available. For installation instructions, see Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . A separate instance of Red Hat JBoss Enterprise Application Platform 7.4 is available. Procedure Install Red Hat Fuse 7.12 on Red Hat JBoss Enterprise Application Platform 7.4. For installation instructions, see the Installing on JBoss EAP section in Red Hat Fuse documentation. Open the pom.xml file in the Fuse home directory in a text editor. Create the integration project with a dependency on the kie-camel component by editing the pom.xml file as shown in the following example: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <exclusions> <exclusion> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <exclusions> <exclusion> <groupId>org.jboss.spec.javax.xml.bind</groupId> <artifactId>jboss-jaxb-api_2.3_spec</artifactId> </exclusion> <exclusion> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-camel</artifactId> <exclusions> <exclusion> <groupId>org.apache.cxf</groupId> <artifactId>cxf-core</artifactId> </exclusion> <exclusion> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf</artifactId> </exclusion> <exclusion> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf-transport</artifactId> </exclusion> <exclusion> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <exclusions> <exclusion> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.0_spec</artifactId> </exclusion> </exclusions> </dependency>
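Note that the dependencies above do not declare <version> elements, so the versions are expected to be supplied by your project's parent POM or a BOM in dependencyManagement. As a hedged check, assuming Maven is available and the project otherwise builds, you can confirm that the kie and jBPM artifacts resolve after editing the pom.xml file:
# List the resolved artifacts for the added group IDs in the project's dependency tree:
mvn dependency:tree -Dincludes=org.kie,org.kie.server,org.jbpm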
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <exclusions> <exclusion> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <exclusions> <exclusion> <groupId>org.jboss.spec.javax.xml.bind</groupId> <artifactId>jboss-jaxb-api_2.3_spec</artifactId> </exclusion> <exclusion> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-camel</artifactId> <exclusions> <exclusion> <groupId>org.apache.cxf</groupId> <artifactId>cxf-core</artifactId> </exclusion> <exclusion> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf</artifactId> </exclusion> <exclusion> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf-transport</artifactId> </exclusion> <exclusion> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <exclusions> <exclusion> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.0_spec</artifactId> </exclusion> </exclusions> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/installing-on-fuse-eap-proc
|