Chapter 5. BuildRequest [build.openshift.io/v1]
Chapter 5. BuildRequest [build.openshift.io/v1] Description BuildRequest is the resource used to pass parameters to build generator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. dockerStrategyOptions object DockerStrategyOptions contains extra strategy options for container image builds env array (EnvVar) env contains additional environment variables you want to pass into a builder container. from ObjectReference from is the reference to the ImageStreamTag that triggered the build. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lastVersion integer lastVersion (optional) is the LastVersion of the BuildConfig that was used to generate the build. If the BuildConfig in the generator doesn't match, a build will not be generated. metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision object SourceRevision is the revision or commit information from the source for the build sourceStrategyOptions object SourceStrategyOptions contains extra strategy options for Source builds triggeredBy array triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. triggeredBy[] object BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. triggeredByImage ObjectReference triggeredByImage is the Image that triggered this build. 5.1.1. .binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 5.1.2. .dockerStrategyOptions Description DockerStrategyOptions contains extra strategy options for container image builds Type object Property Type Description buildArgs array (EnvVar) Args contains any build arguments that are to be passed to Docker. 
See https://docs.docker.com/engine/reference/builder/#/arg for more details noCache boolean noCache overrides the docker-strategy noCache option in the build config 5.1.3. .revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.4. .revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.5. .revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.6. .revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.7. .sourceStrategyOptions Description SourceStrategyOptions contains extra strategy options for Source builds Type object Property Type Description incremental boolean incremental overrides the source-strategy incremental option in the build config 5.1.8. .triggeredBy Description triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. Type array 5.1.9. .triggeredBy[] Description BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. Type object Property Type Description bitbucketWebHook object BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. genericWebHook object GenericWebHookCause holds information about a generic WebHook that triggered a build. githubWebHook object GitHubWebHookCause has information about a GitHub webhook that triggered a build. gitlabWebHook object GitLabWebHookCause has information about a GitLab webhook that triggered a build. imageChangeBuild object ImageChangeCause contains information about the image that triggered a build message string message is used to store a human readable message for why the build was triggered. E.g.: "Manually triggered by user", "Configuration change",etc. 5.1.10. .triggeredBy[].bitbucketWebHook Description BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 5.1.11. 
.triggeredBy[].bitbucketWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.12. .triggeredBy[].bitbucketWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.13. .triggeredBy[].bitbucketWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.14. .triggeredBy[].bitbucketWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.15. .triggeredBy[].genericWebHook Description GenericWebHookCause holds information about a generic WebHook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 5.1.16. .triggeredBy[].genericWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.17. .triggeredBy[].genericWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.18. .triggeredBy[].genericWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.19. .triggeredBy[].genericWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.20. .triggeredBy[].githubWebHook Description GitHubWebHookCause has information about a GitHub webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 5.1.21. 
.triggeredBy[].githubWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.22. .triggeredBy[].githubWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.23. .triggeredBy[].githubWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.24. .triggeredBy[].githubWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.25. .triggeredBy[].gitlabWebHook Description GitLabWebHookCause has information about a GitLab webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 5.1.26. .triggeredBy[].gitlabWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.27. .triggeredBy[].gitlabWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.28. .triggeredBy[].gitlabWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.29. .triggeredBy[].gitlabWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.30. .triggeredBy[].imageChangeBuild Description ImageChangeCause contains information about the image that triggered a build Type object Property Type Description fromRef ObjectReference fromRef contains detailed information about an image that triggered a build. imageID string imageID is the ID of the image that triggered a new build. 5.2. 
API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone POST : create clone of a Build /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate POST : create instantiate of a BuildConfig 5.2.1. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone Table 5.1. Global path parameters Parameter Type Description name string name of the BuildRequest Table 5.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create clone of a Build Table 5.3. Body parameters Parameter Type Description body BuildRequest schema Table 5.4. HTTP responses HTTP code Response body 200 - OK BuildRequest schema 201 - Created BuildRequest schema 202 - Accepted BuildRequest schema 401 - Unauthorized Empty 5.2.2. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate Table 5.5. Global path parameters Parameter Type Description name string name of the BuildRequest Table 5.6. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
HTTP method POST Description create instantiate of a BuildConfig Table 5.7. Body parameters Parameter Type Description body BuildRequest schema Table 5.8. HTTP responses HTTP code Response body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty
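As an illustrative sketch only (the namespace myproject, the BuildConfig name myapp, and the TOKEN and API_SERVER shell variables are hypothetical placeholders, not values from this document), the instantiate endpoint above can be called directly by POSTing a minimal BuildRequest whose metadata.name matches the BuildConfig; in practice, the oc start-build command wraps this call:

# create instantiate of a BuildConfig; a successful call returns the new Build, as listed in the HTTP responses table above
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"kind":"BuildRequest","apiVersion":"build.openshift.io/v1","metadata":{"name":"myapp"}}' \
  "$API_SERVER/apis/build.openshift.io/v1/namespaces/myproject/buildconfigs/myapp/instantiate"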
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/buildrequest-build-openshift-io-v1
18.4. Single File Cache Store
18.4. Single File Cache Store Red Hat JBoss Data Grid includes one file system based cache store: the SingleFileCacheStore. The SingleFileCacheStore is a simple, file system based implementation and a replacement for the older file system based cache store: the FileCacheStore. SingleFileCacheStore stores all key/value pairs and their corresponding metadata information in a single file. To speed up data location, it also keeps all keys and the positions of their values and metadata in memory. Using the single file cache store therefore slightly increases the memory required, depending on the key size and the number of keys stored, so SingleFileCacheStore is not recommended for use cases with very large keys. To reduce memory consumption, the size of the cache store can be set to a fixed number of entries to store in the file. However, this works only when Infinispan is used as a cache. When Infinispan is used this way, data that is not present in Infinispan can be recomputed or re-retrieved from the authoritative data store and stored in the Infinispan cache. The reason for this limitation is that once the maximum number of entries is reached, older data in the cache store is removed, so if Infinispan were used as an authoritative data store, this would lead to data loss, which is undesirable in this use case. Due to these limitations, SingleFileCacheStore can be used only in a limited capacity in production environments. It cannot be used on a shared file system (such as NFS or Windows shares) because of a lack of proper file locking, which can result in data corruption. Furthermore, file systems are not inherently transactional, which can result in file write failures during the commit phase if the cache is used in a transactional context. 18.4.1. Single File Store Configuration (Remote Client-Server Mode) The following is an example of a Single File Store configuration for Red Hat JBoss Data Grid's Remote Client-Server mode: For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)". 18.4.2. Single File Store Configuration (Library Mode) In Red Hat JBoss Data Grid's Library mode, configure a Single File Cache Store as follows: For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)". 18.4.3. Upgrade JBoss Data Grid Cache Stores Red Hat JBoss Data Grid stores data in a different format than earlier versions of JBoss Data Grid. As a result, the newer version of JBoss Data Grid cannot read data stored by older versions. Use rolling upgrades to upgrade persisted data from the format used by the old JBoss Data Grid to the new format. Additionally, the newer version of JBoss Data Grid also stores persistence configuration information in a different location. A rolling upgrade is the process by which a JBoss Data Grid installation is upgraded without a service shutdown. In Library mode, it refers to a node installation where JBoss Data Grid is running in Library mode. For JBoss Data Grid servers, it refers to the server-side components. The upgrade can be due to either a hardware or software change, such as upgrading JBoss Data Grid. Rolling upgrades are only available in JBoss Data Grid's Remote Client-Server mode.
[ "<local-cache name=\"default\" statistics=\"true\"> <file-store name=\"myFileStore\" passivation=\"true\" purge=\"true\" relative-to=\"{PATH}\" path=\"{DIRECTORY}\" max-entries=\"10000\" fetch-state=\"true\" preload=\"false\" /> </local-cache>", "<namedCache name=\"writeThroughToFile\"> <persistence passivation=\"false\"> <singleFile fetchPersistentState=\"true\" ignoreModifications=\"false\" purgeOnStartup=\"false\" shared=\"false\" preload=\"false\" location=\"/tmp/Another-FileCacheStore-Location\" maxEntries=\"100\" maxKeysInMemory=\"100\"> <async enabled=\"true\" threadPoolSize=\"500\" flushLockTimeout=\"1\" modificationQueueSize=\"1024\" shutdownTimeout=\"25000\"/> </singleFile> </persistence> </namedCache>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-single_file_cache_store
17.2. Running the Volume Top Command
17.2. Running the Volume Top Command The volume top command allows you to view the glusterFS bricks' performance metrics, including read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls. The volume top command displays up to 100 results. This section describes how to use the volume top command. 17.2.1. Viewing Open File Descriptor Count and Maximum File Descriptor Count You can view the current open file descriptor count and the list of files that are currently being accessed on the brick with the volume top command. The volume top command also displays the maximum open file descriptor count of files that are currently open, and the maximum number of files opened at any given point of time since the servers are up and running. If the brick name is not specified, then the open file descriptor metrics of all the bricks belonging to the volume are displayed. To view the open file descriptor count and the maximum file descriptor count, use the following command: # gluster volume top VOLNAME open [nfs | brick BRICK-NAME ] [list-cnt cnt ] For example, to view the open file descriptor count and the maximum file descriptor count on brick server:/export of test-volume , and list the top 10 open calls: 17.2.2. Viewing Highest File Read Calls You can view a list of files with the highest file read calls on each brick with the volume top command. If the brick name is not specified, a list of 100 files is displayed by default. To view the highest read() calls, use the following command: # gluster volume top VOLNAME read [nfs | brick BRICK-NAME ] [list-cnt cnt ] For example, to view the highest read calls on brick server:/export of test-volume : 17.2.3. Viewing Highest File Write Calls You can view a list of files with the highest file write calls on each brick with the volume top command. If the brick name is not specified, a list of 100 files is displayed by default. To view the highest write() calls, use the following command: # gluster volume top VOLNAME write [nfs | brick BRICK-NAME ] [list-cnt cnt ] For example, to view the highest write calls on brick server:/export of test-volume : 17.2.4. Viewing Highest Open Calls on a Directory You can view a list of files with the highest open calls on the directories of each brick with the volume top command. If the brick name is not specified, the metrics of all bricks belonging to that volume are displayed. To view the highest open() calls on each directory, use the following command: # gluster volume top VOLNAME opendir [brick BRICK-NAME ] [list-cnt cnt ] For example, to view the highest open calls on brick server:/export/ of test-volume : 17.2.5. Viewing Highest Read Calls on a Directory You can view a list of files with the highest directory read calls on each brick with the volume top command. If the brick name is not specified, the metrics of all bricks belonging to that volume are displayed. To view the highest directory read() calls on each brick, use the following command: # gluster volume top VOLNAME readdir [nfs | brick BRICK-NAME ] [list-cnt cnt ] For example, to view the highest directory read calls on brick server:/export/ of test-volume : 17.2.6. Viewing Read Performance You can view the read throughput of files on each brick with the volume top command. If the brick name is not specified, the metrics of all the bricks belonging to that volume are displayed. The output is the read throughput. 
This command initiates a read() call for the specified count and block size and measures the corresponding throughput directly on the back-end export, bypassing glusterFS processes. To view the read performance on each brick, use the command, specifying options as needed: # gluster volume top VOLNAME read-perf [bs blk-size count count ] [nfs | brick BRICK-NAME ] [list-cnt cnt ] For example, to view the read performance on brick server:/export/ of test-volume , specifying a 256 block size, and list the top 10 results: 17.2.7. Viewing Write Performance You can view the write throughput of files on each brick or NFS server with the volume top command. If brick name is not specified, then the metrics of all the bricks belonging to that volume will be displayed. The output will be the write throughput. This command initiates a write operation for the specified count and block size and measures the corresponding throughput directly on back-end export, bypassing glusterFS processes. To view the write performance on each brick, use the following command, specifying options as needed: # gluster volume top VOLNAME write-perf [bs blk-size count count ] [nfs | brick BRICK-NAME ] [list-cnt cnt ] For example, to view the write performance on brick server:/export/ of test-volume , specifying a 256 block size, and list the top 10 results:
[ "gluster volume top test open brick server:/bricks/brick1/test list-cnt 10 Brick: server/bricks/brick1/test Current open fds: 2, Max open fds: 4, Max openfd time: 2020-10-09 05:57:20.171038 Count filename ======================= 2 /file222 1 /file1", "gluster volume top testvol_distributed-dispersed read brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10 Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1 Count filename ======================= 9 /user11/dir1/dir4/testfile2.txt 9 /user11/dir1/dir1/testfile0.txt 9 /user11/dir0/dir4/testfile4.txt 9 /user11/dir0/dir3/testfile4.txt 9 /user11/dir0/dir1/testfile0.txt 9 /user11/testfile4.txt 9 /user11/testfile3.txt 5 /user11/dir2/dir1/testfile4.txt 5 /user11/dir2/dir1/testfile0.txt 5 /user11/dir2/testfile2.txt", "gluster volume top testvol_distributed-dispersed write brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10 Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1 Count filename ======================= 8 /user12/dir4/dir4/testfile3.txt 8 /user12/dir4/dir3/testfile3.txt 8 /user2/dir4/dir3/testfile4.txt 8 /user3/dir4/dir4/testfile1.txt 8 /user12/dir4/dir2/testfile3.txt 8 /user2/dir4/dir1/testfile0.txt 8 /user11/dir4/dir3/testfile4.txt 8 /user3/dir4/dir2/testfile2.txt 8 /user12/dir4/dir0/testfile0.txt 8 /user11/dir4/dir3/testfile3.txt", "gluster volume top testvol_distributed-dispersed opendir brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10 Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1 Count filename ======================= 3 /user2/dir3/dir2 3 /user2/dir3/dir1 3 /user2/dir3/dir0 3 /user2/dir3 3 /user2/dir2/dir4 3 /user2/dir2/dir3 3 /user2/dir2/dir2 3 /user2/dir2/dir1 3 /user2/dir2/dir0 3 /user2/dir2", "gluster volume top testvol_distributed-dispersed readdir brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10 Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1 Count filename ======================= 4 /user6/dir2/dir3 4 /user6/dir1/dir4 4 /user6/dir1/dir2 4 /user6/dir1 4 /user6/dir0 4 /user13/dir1/dir1 4 /user3/dir4/dir4 4 /user3/dir3/dir4 4 /user3/dir3/dir3 4 /user3/dir3/dir1", "gluster volume top testvol_distributed-dispersed read-perf bs 256 count 1 brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10 Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1 Throughput 10.67 MBps time 0.0000 secs MBps Filename Time ==== ======== ==== 0 /user2/dir3/dir2/testfile3.txt 2021-02-01 15:47:35.391234 0 /user2/dir3/dir2/testfile0.txt 2021-02-01 15:47:35.371018 0 /user2/dir3/dir1/testfile4.txt 2021-02-01 15:47:33.375333 0 /user2/dir3/dir1/testfile0.txt 2021-02-01 15:47:31.859194 0 /user2/dir3/dir0/testfile2.txt 2021-02-01 15:47:31.749105 0 /user2/dir3/dir0/testfile1.txt 2021-02-01 15:47:31.728151 0 /user2/dir3/testfile4.txt 2021-02-01 15:47:31.296924 0 /user2/dir3/testfile3.txt 2021-02-01 15:47:30.988683 0 /user2/dir3/testfile0.txt 2021-02-01 15:47:30.557743 0 /user2/dir2/dir4/testfile4.txt 2021-02-01 15:47:30.464017", "gluster volume top testvol_distributed-dispersed write-perf bs 256 count 1 brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10 Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1 Throughput 3.88 MBps time 0.0001 secs MBps Filename Time ==== ======== ==== 0 /user12/dir4/dir4/testfile4.txt 2021-02-01 13:30:32.225628 0 /user12/dir4/dir4/testfile3.txt 2021-02-01 13:30:31.771095 0 
/user12/dir4/dir4/testfile0.txt 2021-02-01 13:30:29.655447 0 /user12/dir4/dir3/testfile4.txt 2021-02-01 13:30:29.62920 0 /user12/dir4/dir3/testfile3.txt 2021-02-01 13:30:28.995407 0 /user2/dir4/dir4/testfile2.txt 2021-02-01 13:30:28.489318 0 /user2/dir4/dir4/testfile1.txt 2021-02-01 13:30:27.956523 0 /user2/dir4/dir3/testfile4.txt 2021-02-01 13:30:27.34337 0 /user12/dir4/dir3/testfile0.txt 2021-02-01 13:30:26.699984 0 /user3/dir4/dir4/testfile2.txt 2021-02-01 13:30:26.602165" ]
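As an additional illustration (the volume name test-volume is a placeholder, and this assumes NFS access to the volume is enabled), the same open-call metrics can be gathered from the NFS server side instead of a single brick by using the nfs keyword from the command syntax above:

# gluster volume top test-volume open nfs list-cnt 10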
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-running_the_volume_top_command
Chapter 1. Getting started with TuneD
Chapter 1. Getting started with TuneD As a system administrator, you can use the TuneD application to optimize the performance profile of your system for a variety of use cases. 1.1. The purpose of TuneD TuneD is a service that monitors your system and optimizes the performance under certain workloads. At the core of TuneD are profiles, which tune your system for different use cases. TuneD is distributed with a number of predefined profiles for use cases such as: High throughput Low latency Saving power It is possible to modify the rules defined for each profile and customize how to tune a particular device. When you switch to another profile or deactivate TuneD , all changes made to the system settings by the profile revert back to their original state. You can also configure TuneD to react to changes in device usage and adjust settings to improve the performance of active devices and reduce the power consumption of inactive devices. 1.2. TuneD profiles A detailed analysis of a system can be very time-consuming. TuneD provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles. The profiles provided with TuneD are divided into the following categories: Power-saving profiles Performance-boosting profiles The performance-boosting profiles include profiles that focus on the following aspects: Low latency for storage and network High throughput for storage and network Virtual machine performance Virtualization host performance Syntax of profile configuration The tuned.conf file can contain one [main] section and other sections for configuring plug-in instances. However, all sections are optional. Lines starting with the hash sign ( # ) are comments. Additional resources tuned.conf(5) man page on your system 1.3. The default TuneD profile During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules: Environment Default profile Goal Compute nodes throughput-performance The best throughput performance Virtual machines virtual-guest The best performance. If you are not interested in the best performance, you can change it to the balanced or powersave profile. Other cases balanced Balanced performance and power consumption Additional resources tuned.conf(5) man page on your system 1.4. Merged TuneD profiles As an experimental feature, it is possible to select multiple profiles at once. TuneD will try to merge them during loading. If there are conflicts, the settings from the last specified profile take precedence. Example 1.1. Low power consumption in a virtual guest The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority: Warning Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to the low value by the spindown-disk profile. Additional resources tuned-adm and tuned.conf(5) man pages on your system 1.5. The location of TuneD profiles TuneD stores profiles in the following directories: /usr/lib/tuned/ Distribution-specific profiles are stored in this directory. Each profile has its own directory. 
The profile consists of the main configuration file called tuned.conf , and optionally other files, for example helper scripts. /etc/tuned/ If you need to customize a profile, copy the profile directory into the directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in /etc/tuned/ is used. Additional resources tuned.conf(5) man page on your system 1.6. TuneD profiles distributed with RHEL The following is a list of profiles that are installed with TuneD on Red Hat Enterprise Linux. Note There might be more product-specific or third-party TuneD profiles available. Such profiles are usually provided by separate RPM packages. balanced The default power-saving profile. It is intended to be a compromise between performance and power consumption. It uses auto-scaling and auto-tuning whenever possible. The only drawback is the increased latency. In the current TuneD release, it enables the CPU, disk, audio, and video plugins, and activates the conservative CPU governor. The radeon_powersave option uses the dpm-balanced value if it is supported, otherwise it is set to auto . It changes the energy_performance_preference attribute to the normal energy setting. It also changes the scaling_governor policy attribute to either the conservative or powersave CPU governor. powersave A profile for maximum power saving performance. It can throttle the performance in order to minimize the actual power consumption. In the current TuneD release it enables USB autosuspend, WiFi power saving, and Aggressive Link Power Management (ALPM) power savings for SATA host adapters. It also schedules multi-core power savings for systems with a low wakeup rate and activates the ondemand governor. It enables AC97 audio power saving or, depending on your system, HDA-Intel power savings with a 10 seconds timeout. If your system contains a supported Radeon graphics card with enabled KMS, the profile configures it to automatic power saving. On ASUS Eee PCs, a dynamic Super Hybrid Engine is enabled. It changes the energy_performance_preference attribute to the powersave or power energy setting. It also changes the scaling_governor policy attribute to either the ondemand or powersave CPU governor. Note In certain cases, the balanced profile is more efficient compared to the powersave profile. Consider there is a defined amount of work that needs to be done, for example a video file that needs to be transcoded. Your machine might consume less energy if the transcoding is done on the full power, because the task is finished quickly, the machine starts to idle, and it can automatically step-down to very efficient power save modes. On the other hand, if you transcode the file with a throttled machine, the machine consumes less power during the transcoding, but the process takes longer and the overall consumed energy can be higher. That is why the balanced profile can be generally a better option. throughput-performance A server profile optimized for high throughput. It disables power savings mechanisms and enables sysctl settings that improve the throughput performance of the disk and network IO. CPU governor is set to performance . It changes the energy_performance_preference and scaling_governor attribute to the performance profile. accelerator-performance The accelerator-performance profile contains the same tuning as the throughput-performance profile. Additionally, it locks the CPU to low C states so that the latency is less than 100us. 
This improves the performance of certain accelerators, such as GPUs. latency-performance A server profile optimized for low latency. It disables power savings mechanisms and enables sysctl settings that improve latency. CPU governor is set to performance and the CPU is locked to the low C states (by PM QoS). It changes the energy_performance_preference and scaling_governor attribute to the performance profile. network-latency A profile for low latency network tuning. It is based on the latency-performance profile. It additionally disables transparent huge pages and NUMA balancing, and tunes several other network-related sysctl parameters. It inherits the latency-performance profile which changes the energy_performance_preference and scaling_governor attribute to the performance profile. hpc-compute A profile optimized for high-performance computing. It is based on the latency-performance profile. network-throughput A profile for throughput network tuning. It is based on the throughput-performance profile. It additionally increases kernel network buffers. It inherits either the latency-performance or throughput-performance profile, and changes the energy_performance_preference and scaling_governor attribute to the performance profile. virtual-guest A profile designed for Red Hat Enterprise Linux 9 virtual machines and VMWare guests based on the throughput-performance profile that, among other tasks, decreases virtual memory swappiness and increases disk readahead values. It does not disable disk barriers. It inherits the throughput-performance profile and changes the energy_performance_preference and scaling_governor attribute to the performance profile. virtual-host A profile designed for virtual hosts based on the throughput-performance profile that, among other tasks, decreases virtual memory swappiness, increases disk readahead values, and enables a more aggressive value of dirty pages writeback. It inherits the throughput-performance profile and changes the energy_performance_preference and scaling_governor attribute to the performance profile. oracle A profile optimized for Oracle databases loads based on throughput-performance profile. It additionally disables transparent huge pages and modifies other performance-related kernel parameters. This profile is provided by the tuned-profiles-oracle package. desktop A profile optimized for desktops, based on the balanced profile. It additionally enables scheduler autogroups for better response of interactive applications. optimize-serial-console A profile that tunes down I/O activity to the serial console by reducing the printk value. This should make the serial console more responsive. This profile is intended to be used as an overlay on other profiles. For example: mssql A profile provided for Microsoft SQL Server. It is based on the throughput-performance profile. intel-sst A profile optimized for systems with user-defined Intel Speed Select Technology configurations. This profile is intended to be used as an overlay on other profiles. For example: 1.7. TuneD cpu-partitioning profile For tuning Red Hat Enterprise Linux 9 for latency-sensitive workloads, Red Hat recommends to use the cpu-partitioning TuneD profile. Prior to Red Hat Enterprise Linux 9, the low-latency Red Hat documentation described the numerous low-level steps needed to achieve low-latency tuning. In Red Hat Enterprise Linux 9, you can perform low-latency tuning more efficiently by using the cpu-partitioning TuneD profile. 
This profile is easily customizable according to the requirements for individual low-latency applications. The following figure is an example to demonstrate how to use the cpu-partitioning profile. This example uses the CPU and node layout. Figure 1.1. cpu-partitioning You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the following configuration options: Isolated CPUs with load balancing In the cpu-partitioning figure, the blocks numbered from 4 to 23 are the default isolated CPUs. The kernel scheduler's process load balancing is enabled on these CPUs. It is designed for low-latency processes with multiple threads that need the kernel scheduler load balancing. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the isolated_cores=cpu-list option, which lists CPUs to isolate that will use the kernel scheduler load balancing. The list of isolated CPUs is comma-separated or you can specify a range using a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. Isolated CPUs without load balancing In the cpu-partitioning figure, the blocks numbered 2 and 3 are the isolated CPUs that do not provide any additional kernel scheduler process load balancing. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the no_balance_cores=cpu-list option, which lists CPUs to isolate that will not use the kernel scheduler load balancing. Specifying the no_balance_cores option is optional; however, any CPUs in this list must be a subset of the CPUs listed in the isolated_cores list. Application threads using these CPUs need to be pinned individually to each CPU. Housekeeping CPUs Any CPU not isolated in the cpu-partitioning-variables.conf file is automatically considered a housekeeping CPU. On the housekeeping CPUs, all services, daemons, user processes, movable kernel threads, interrupt handlers, and kernel timers are permitted to execute. Additional resources tuned-profiles-cpu-partitioning(7) man page on your system 1.8. Using the TuneD cpu-partitioning profile for low-latency tuning This procedure describes how to tune a system for low-latency using TuneD's cpu-partitioning profile. It uses the example of a low-latency application that can use cpu-partitioning and the CPU layout as mentioned in the cpu-partitioning figure. The application in this case uses: One dedicated reader thread that reads data from the network will be pinned to CPU 2. A large number of threads that process this network data will be pinned to CPUs 4-23. A dedicated writer thread that writes the processed data to the network will be pinned to CPU 3. Prerequisites You have installed the cpu-partitioning TuneD profile by using the dnf install tuned-profiles-cpu-partitioning command as root. Procedure Edit the /etc/tuned/cpu-partitioning-variables.conf file with the following changes: Comment out the isolated_cores=${f:calc_isolated_cores:1} line: Add the following information for isolated CPUs: Set the cpu-partitioning TuneD profile: Reboot the system. After rebooting, the system is tuned for low-latency, according to the isolation in the cpu-partitioning figure. The application can use taskset to pin the reader and writer threads to CPUs 2 and 3, and the remaining application threads on CPUs 4-23. A brief taskset sketch follows the command listing below. 
Verification Verify that the isolated CPUs are not reflected in the Cpus_allowed_list field: To see affinity of all processes, enter: Note TuneD cannot change the affinity of some processes, mostly kernel processes. In this example, processes with PID 4 and 9 remain unchanged. Additional resources tuned-profiles-cpu-partitioning(7) man page 1.9. Customizing the cpu-partitioning TuneD profile You can extend the TuneD profile to make additional tuning changes. For example, the cpu-partitioning profile sets the CPUs to use cstate=1 . In order to use the cpu-partitioning profile but to additionally change the CPU cstate from cstate1 to cstate0, the following procedure describes a new TuneD profile named my_profile , which inherits the cpu-partitioning profile and then sets C state-0. Procedure Create the /etc/tuned/my_profile directory: Create a tuned.conf file in this directory, and add the following content: Use the new profile: Note In the shared example, a reboot is not required. However, if the changes in the my_profile profile require a reboot to take effect, then reboot your machine. Additional resources tuned-profiles-cpu-partitioning(7) man page on your system 1.10. Real-time TuneD profiles distributed with RHEL Real-time profiles are intended for systems running the real-time kernel. Without a special kernel build, they do not configure the system to be real-time. On RHEL, the profiles are available from additional repositories. The following real-time profiles are available: realtime Use on bare-metal real-time systems. Provided by the tuned-profiles-realtime package, which is available from the RT or NFV repositories. realtime-virtual-host Use in a virtualization host configured for real-time. Provided by the tuned-profiles-nfv-host package, which is available from the NFV repository. realtime-virtual-guest Use in a virtualization guest configured for real-time. Provided by the tuned-profiles-nfv-guest package, which is available from the NFV repository. 1.11. Static and dynamic tuning in TuneD Understanding the difference between the two categories of system tuning that TuneD applies, static and dynamic , is important when determining which one to use for a given situation or purpose. Static tuning Mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools such as ethtool . Dynamic tuning Watches how various system components are used throughout the uptime of your system. TuneD adjusts system settings dynamically based on that monitoring information. For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. TuneD monitors the activity of these components and reacts to the changes in their use. By default, dynamic tuning is disabled. To enable it, edit the /etc/tuned/tuned-main.conf file and change the dynamic_tuning option to 1 . TuneD then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the update_interval option. Currently implemented dynamic tuning algorithms try to balance the performance and powersave, and are therefore disabled in the performance profiles. Dynamic tuning for individual plug-ins can be enabled or disabled in the TuneD profiles. Example 1.2. 
Static and dynamic tuning on a workstation On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded. For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. TuneD has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in lower power usage. If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, TuneD detects this and sets the interface speed to maximum to offer the best performance while the activity level is high. This principle is used for other plug-ins for CPU and disks as well. 1.12. TuneD no-daemon mode You can run TuneD in no-daemon mode, which does not require any resident memory. In this mode, TuneD applies the settings and exits. By default, no-daemon mode is disabled because a lot of TuneD functionality is missing in this mode, including: D-Bus support Hot-plug support Rollback support for settings To enable no-daemon mode, include the following line in the /etc/tuned/tuned-main.conf file: 1.13. Installing and enabling TuneD This procedure installs and enables the TuneD application, installs TuneD profiles, and presets a default TuneD profile for your system. Procedure Install the TuneD package: Enable and start the TuneD service: Optional: Install TuneD profiles for real-time systems: For the TuneD profiles for real-time systems, enable the appropriate RHEL 9 repository, then install them. Verify that a TuneD profile is active and applied: Note The active profile that TuneD automatically presets differs based on your machine type and system settings. 1.14. Listing available TuneD profiles This procedure lists all TuneD profiles that are currently available on your system. Procedure To list all available TuneD profiles on your system, use: $ tuned-adm list Available profiles: - accelerator-performance - Throughput performance based tuning with disabled higher latency STOP states - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case - latency-performance - Optimize for deterministic performance at the cost of increased power consumption - network-latency - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance - network-throughput - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks - powersave - Optimize for low power consumption - throughput-performance - Broadly applicable tuning that provides excellent performance across a variety of common server workloads - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced To display only the currently active profile, use: Additional resources tuned-adm(8) man page on your system 1.15. Setting a TuneD profile This procedure activates a selected TuneD profile on your system. Prerequisites The TuneD service is running. See Installing and Enabling TuneD for details. Procedure Optional: You can let TuneD recommend the most suitable profile for your system: Activate a profile: Alternatively, you can activate a combination of multiple profiles: Example 1.3. 
A virtual machine optimized for low power consumption The following example optimizes the system to run in a virtual machine with the best performance and concurrently tunes it for low power consumption, while the low power consumption is the priority: View the current active TuneD profile on your system: Reboot the system: Verification Verify that the TuneD profile is active and applied: Additional resources tuned-adm(8) man page on your system 1.16. Using the TuneD D-Bus interface You can directly communicate with TuneD at runtime through the TuneD D-Bus interface to control a variety of TuneD services. You can use the busctl or dbus-send commands to access the D-Bus API. Note Although you can use either the busctl or dbus-send command, the busctl command is a part of systemd and, therefore, present on most hosts already. 1.16.1. Using the TuneD D-Bus interface to show available TuneD D-Bus API methods You can see the D-Bus API methods available to use with TuneD by using the TuneD D-Bus interface. Prerequisites The TuneD service is running. See Installing and Enabling TuneD for details. Procedure To see the available TuneD API methods, run: The output should look similar to the following: You can find descriptions of the different available methods in the TuneD upstream repository . 1.16.2. Using the TuneD D-Bus interface to change the active TuneD profile You can replace the active TuneD profile with your desired TuneD profile by using the TuneD D-Bus interface. Prerequisites The TuneD service is running. See Installing and Enabling TuneD for details. Procedure To change the active TuneD profile, run: Replace profile with the name of your desired profile. Verification To view the current active TuneD profile, run: 1.17. Disabling TuneD This procedure disables TuneD and resets all affected system settings to their original state before TuneD modified them. Procedure To disable all tunings temporarily: The tunings are applied again after the TuneD service restarts. Alternatively, to stop and disable the TuneD service permanently: Additional resources tuned-adm(8) man page on your system
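As a brief sketch of the dynamic tuning settings described in Section 1.11 (the 10-second interval is only an illustrative value), the relevant options in /etc/tuned/tuned-main.conf would look like the following, after which the tuned service needs to be restarted so that the daemon re-reads the file:

/etc/tuned/tuned-main.conf (excerpt):
dynamic_tuning = 1
update_interval = 10

Then restart the service:
# systemctl restart tuned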
[ "tuned-adm profile virtual-guest powersave", "tuned-adm profile throughput-performance optimize-serial-console", "tuned-adm profile cpu-partitioning intel-sst", "isolated_cores=USD{f:calc_isolated_cores:1}", "All isolated CPUs: isolated_cores=2-23 Isolated CPUs without the kernel's scheduler load balancing: no_balance_cores=2,3", "tuned-adm profile cpu-partitioning", "cat /proc/self/status | grep Cpu Cpus_allowed: 003 Cpus_allowed_list: 0-1", "ps -ae -o pid= | xargs -n 1 taskset -cp pid 1's current affinity list: 0,1 pid 2's current affinity list: 0,1 pid 3's current affinity list: 0,1 pid 4's current affinity list: 0-5 pid 5's current affinity list: 0,1 pid 6's current affinity list: 0,1 pid 7's current affinity list: 0,1 pid 9's current affinity list: 0", "mkdir /etc/tuned/ my_profile", "vi /etc/tuned/ my_profile /tuned.conf [main] summary=Customized tuning on top of cpu-partitioning include=cpu-partitioning [cpu] force_latency=cstate.id:0|1", "tuned-adm profile my_profile", "daemon = 0", "dnf install tuned", "systemctl enable --now tuned", "subscription-manager repos --enable=rhel-9-for-x86_64-nfv-beta-rpms", "dnf install tuned-profiles-realtime tuned-profiles-nfv", "tuned-adm active Current active profile: throughput-performance", "tuned-adm verify Verification succeeded, current system settings match the preset profile. See tuned log file ('/var/log/tuned/tuned.log') for details.", "tuned-adm list Available profiles: - accelerator-performance - Throughput performance based tuning with disabled higher latency STOP states - balanced - General non-specialized TuneD profile - desktop - Optimize for the desktop use-case - latency-performance - Optimize for deterministic performance at the cost of increased power consumption - network-latency - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance - network-throughput - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks - powersave - Optimize for low power consumption - throughput-performance - Broadly applicable tuning that provides excellent performance across a variety of common server workloads - virtual-guest - Optimize for running inside a virtual guest - virtual-host - Optimize for running KVM guests Current active profile: balanced", "tuned-adm active Current active profile: throughput-performance", "tuned-adm recommend throughput-performance", "tuned-adm profile selected-profile", "tuned-adm profile selected-profile1 selected-profile2", "tuned-adm profile virtual-guest powersave", "tuned-adm active Current active profile: selected-profile", "reboot", "tuned-adm verify Verification succeeded, current system settings match the preset profile. 
See tuned log file ('/var/log/tuned/tuned.log') for details.", "busctl introspect com.redhat.tuned /Tuned com.redhat.tuned.control", "NAME TYPE SIGNATURE RESULT/VALUE FLAGS .active_profile method - s - .auto_profile method - (bs) - .disable method - b - .get_all_plugins method - a{sa{ss}} - .get_plugin_documentation method s s - .get_plugin_hints method s a{ss} - .instance_acquire_devices method ss (bs) - .is_running method - b - .log_capture_finish method s s - .log_capture_start method ii s - .post_loaded_profile method - s - .profile_info method s (bsss) - .profile_mode method - (ss) - .profiles method - as - .profiles2 method - a(ss) - .recommend_profile method - s - .register_socket_signal_path method s b - .reload method - b - .start method - b - .stop method - b - .switch_profile method s (bs) - .verify_profile method - b - .verify_profile_ignore_missing method - b - .profile_changed signal sbs - -", "busctl call com.redhat.tuned /Tuned com.redhat.tuned.control switch_profile s profile (bs) true \"OK\"", "busctl call com.redhat.tuned /Tuned com.redhat.tuned.control active_profile s \" profile \"", "tuned-adm off", "systemctl disable --now tuned" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance
function::task_utime_tid
function::task_utime_tid Name function::task_utime_tid - User time of the given task Synopsis Arguments tid Thread id of the given task Description Returns the user time of the given task in cputime, or zero if the task doesn't exist. Does not include any time used by other tasks in this process, nor does it include any time of the children of this task.
[ "task_utime_tid:long(tid:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-utime-tid
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Google cloud
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Google cloud 4.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on a Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Google Cloud installer-provisioned infrastructure Replacing failed nodes on Google Cloud installer-provisioned infrastructures.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_google_cloud
Chapter 33. Configuring Microsoft SQL Server by using RHEL system roles
Chapter 33. Configuring Microsoft SQL Server by using RHEL system roles You can use the microsoft.sql.server Ansible system role to automate the installation and management of Microsoft SQL Server. This role also optimizes Red Hat Enterprise Linux (RHEL) to improve the performance and throughput of SQL Server by applying the mssql TuneD profile. Note During the installation, the role adds repositories for SQL Server and related packages to the managed hosts. Packages in these repositories are provided, maintained, and hosted by Microsoft. 33.1. Installing and configuring SQL Server with an existing TLS certificate by using the microsoft.sql.server Ansible system role If your application requires a Microsoft SQL Server database, you can configure SQL Server with TLS encryption to enable secure communication between the application and the database. By using the microsoft.sql.server Ansible system role, you can automate this process and remotely install and configure SQL Server with TLS encryption. In the playbook, you can use an existing private key and a TLS certificate that was issued by a certificate authority (CA). Depending on the RHEL version on the managed host, the version of SQL Server that you can install differs: RHEL 7.9: SQL Server 2017 and 2019 RHEL 8: SQL Server 2017, 2019, and 2022 RHEL 9.4 and later: SQL Server 2022 Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node. The managed node has 2 GB or more RAM installed. The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later. You stored the certificate in the sql_crt.pem file in the same directory as the playbook. You stored the private key in the sql_cert.key file in the same directory as the playbook. SQL clients trust the CA that issued the certificate. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: sa_pwd: <sa_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with an existing private key and certificate ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: "{{ sa_pwd }}" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_tls_enable: true mssql_tls_cert: sql_crt.pem mssql_tls_private_key: sql_cert.key mssql_tls_version: 1.2 mssql_tls_force: true The settings specified in the example playbook include the following: mssql_tls_enable: true Enables TLS encryption. If you enable this setting, you must also define mssql_tls_cert and mssql_tls_private_key . mssql_tls_cert: <path> Sets the path to the TLS certificate stored on the control node. The role copies this file to the /etc/pki/tls/certs/ directory on the managed node. 
mssql_tls_private_key: <path> Sets the path to the TLS private key on the control node. The role copies this file to the /etc/pki/tls/private/ directory on the managed node. mssql_tls_force: true Replaces the TLS certificate and private key in their destination directories if they exist. For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the SQL Server host, use the sqlcmd utility with the -N parameter to establish an encrypted connection to SQL server and run a query, for example: If the command succeeds, the connection to the server was TLS encrypted. Additional resources /usr/share/ansible/roles/microsoft.sql-server/README.md file Ansible vault 33.2. Installing and configuring SQL Server with a TLS certificate issued from IdM by using the microsoft.sql.server Ansible system role If your application requires a Microsoft SQL Server database, you can configure SQL Server with TLS encryption to enable secure communication between the application and the database. If the SQL Server host is a member in a Red Hat Enterprise Linux Identity Management (IdM) domain, the certmonger service can manage the certificate request and future renewals. By using the microsoft.sql.server Ansible system role, you can automate this process. You can remotely install and configure SQL Server with TLS encryption, and the microsoft.sql.server role uses the certificate Ansible system role to configure certmonger and request a certificate from IdM. Depending on the RHEL version on the managed host, the version of SQL Server that you can install differs: RHEL 7.9: SQL Server 2017 and 2019 RHEL 8: SQL Server 2017, 2019, and 2022 RHEL 9.4 and later: SQL Server 2022 Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node. The managed node has 2 GB or more RAM installed. The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later. You enrolled the managed node in a Red Hat Enterprise Linux Identity Management (IdM) domain. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: sa_pwd: <sa_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. 
Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with certificates issued by Red Hat IdM ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: "{{ sa_pwd }}" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_tls_enable: true mssql_tls_certificates: - name: sql_cert dns: server.example.com ca: ipa The settings specified in the example playbook include the following: mssql_tls_enable: true Enables TLS encryption. If you enable this setting, you must also define mssql_tls_certificates . mssql_tls_certificates A list of YAML dictionaries with settings for the certificate role. name: <file_name> Defines the base name of the certificate and private key. The certificate role stores the certificate in the /etc/pki/tls/certs/ <file_name> .crt and the private key in the /etc/pki/tls/private/ <file_name> .key file. dns: <hostname_or_list_of_hostnames> Sets the hostnames that the Subject Alternative Names (SAN) field in the issued certificate contains. You can use a wildcard ( * ) or specify multiple names in YAML list format. ca: <ca_type> Defines how the certificate role requests the certificate. Set the variable to ipa if the host is enrolled in an IdM domain or self-sign to request a self-signed certificate. For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the SQL Server host, use the sqlcmd utility with the -N parameter to establish an encrypted connection to SQL server and run a query, for example: If the command succeeds, the connection to the server was TLS encrypted. Additional resources /usr/share/ansible/roles/microsoft.sql-server/README.md file Requesting certificates by using RHEL system roles Ansible vault 33.3. Installing and configuring SQL Server with custom storage paths by using the microsoft.sql.server Ansible system role When you use the microsoft.sql.server Ansible system role to install and configure a new SQL Server, you can customize the paths and modes of the data and log directories. For example, configure custom paths if you want to store databases and log files in a different directory with more storage. Important If you change the data or log path and re-run the playbook, the previously-used directories and all their content remain at the original path. Only new databases and logs are stored in the new location. Table 33.1. SQL Server default settings for data and log directories Type Directory Mode Owner Group Data /var/opt/mssql/data/ [a] mssql mssql Logs /var/opt/mssql/log/ [a] mssql mssql [a] If the directory exists, the role preserves the mode. If the directory does not exist, the role applies the default umask on the managed node when it creates the directory. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them. You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node. The managed node has 2 GB or more RAM installed. The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: sa_pwd: <sa_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Edit an existing playbook file, for example ~/playbook.yml , and add the storage and log-related variables: --- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with custom storage paths ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: "{{ sa_pwd }}" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_datadir: /var/lib/mssql/ mssql_datadir_mode: '0700' mssql_logdir: /var/log/mssql/ mssql_logdir_mode: '0700' The settings specified in the example playbook include the following: mssql_datadir_mode and mssql_logdir_mode Set the permission modes. Specify the value in single quotes to ensure that the role parses the value as a string and not as an octal number. For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the mode of the data directory: Display the mode of the log directory: Additional resources /usr/share/ansible/roles/microsoft.sql-server/README.md file Ansible vault 33.4. Installing and configuring SQL Server with AD integration by using the microsoft.sql.server Ansible system role You can integrate Microsoft SQL Server into an Active Directory (AD) to enable AD users to authenticate to SQL Server. By using the microsoft.sql.server Ansible system role, you can automate this process and remotely install and configure SQL Server accordingly. Note that you must still perform manual steps in AD and SQL Server after you run the playbook. Depending on the RHEL version on the managed host, the version of SQL Server that you can install differs: RHEL 7.9: SQL Server 2017 and 2019 RHEL 8: SQL Server 2017, 2019, and 2022 RHEL 9.4 and later: SQL Server 2022 Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You installed the ansible-collection-microsoft-sql package or the microsoft.sql collection on the control node. The managed node has 2 GB or more RAM installed. The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later. An AD domain is available in the network. A reverse DNS (RDNS) zone exists in AD, and it contains Pointer (PTR) resource records for each AD domain controller (DC). The managed host's network settings use an AD DNS server. 
The managed host can resolve the following DNS entries: Both the hostnames and the fully-qualified domain names (FQDNs) of the AD DCs resolve to their IP addresses. The IP addresses of the AD DCs resolve to their FQDNs. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: sa_pwd: <sa_password> sql_pwd: <SQL_AD_password> ad_admin_pwd: <AD_admin_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with AD authentication ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: "{{ sa_pwd }}" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_ad_configure: true mssql_ad_join: true mssql_ad_sql_user: sqluser mssql_ad_sql_password: "{{ sql_pwd }}" ad_integration_realm: ad.example.com ad_integration_user: Administrator ad_integration_password: "{{ ad_admin_pwd }}" The settings specified in the example playbook include the following: mssql_ad_configure: true Enables authentication against AD. mssql_ad_join: true Uses the ad_integration RHEL system role to join the managed node to AD. The role uses the settings from the ad_integration_realm , ad_integration_user , and ad_integration_password variables to join the domain. mssql_ad_sql_user: <username> Sets the name of an AD account that the role should create in AD and SQL Server for administration purposes. ad_integration_user: <AD_user> Sets the name of an AD user with privileges to join machines to the domain and to create the AD user specified in mssql_ad_sql_user . For details about all variables used in the playbook, see the /usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: In your AD domain, enable 128 bit and 256 bit Kerberos authentication for the AD SQL user which you specified in the playbook. Use one of the following options: In the Active Directory Users and Computers application: Navigate to ad.example.com > Users > sqluser > Accounts . In the Account options list, select This account supports Kerberos AES 128 bit encryption and This account supports Kerberos AES 256 bit encryption . Click Apply . In PowerShell in admin mode, enter: Authorize AD users that should be able to authenticate to SQL Server. On the SQL Server, perform the following steps: Obtain a Kerberos ticket for the Administrator user: Authorize an AD user: Repeat this step for every AD user who should be able to access SQL Server. Verification On the managed node that runs SQL Server: Obtain a Kerberos ticket for an AD user: Use the sqlcmd utility to log in to SQL Server and run a query, for example: Additional resources /usr/share/ansible/roles/microsoft.sql-server/README.md file Ansible vault
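Beyond the verification queries shown in the command listings below, a quick additional check could confirm that the role applied the mssql TuneD profile mentioned at the start of this chapter. This is only a hedged sketch: the host name matches the example playbooks, and the expected output assumes the role switched the profile as described; adjust both for your inventory.
$ ansible managed-node-01.example.com -m command -a 'tuned-adm active'
If the role applied its tuning, the output should report the mssql profile as the current active profile.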
[ "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "sa_pwd: <sa_password>", "--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with an existing private key and certificate ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_tls_enable: true mssql_tls_cert: sql_crt.pem mssql_tls_private_key: sql_cert.key mssql_tls_version: 1.2 mssql_tls_force: true", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "/opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U \"sa\" -P <sa_password> -Q 'SELECT SYSTEM_USER'", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "sa_pwd: <sa_password>", "--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with certificates issued by Red Hat IdM ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_tls_enable: true mssql_tls_certificates: - name: sql_cert dns: server.example.com ca: ipa", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "/opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U \"sa\" -P <sa_password> -Q 'SELECT SYSTEM_USER'", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "sa_pwd: <sa_password>", "--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with custom storage paths ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_datadir: /var/lib/mssql/ mssql_datadir_mode: '0700' mssql_logdir: /var/log/mssql/ mssql_logdir_mode: '0700'", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ls -ld /var/lib/mssql/' drwx------. 12 mssql mssql 4096 Jul 3 13:53 /var/lib/mssql/", "ansible managed-node-01.example.com -m command -a 'ls -ld /var/log/mssql/' drwx------. 
12 mssql mssql 4096 Jul 3 13:53 /var/log/mssql/", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "sa_pwd: <sa_password> sql_pwd: <SQL_AD_password> ad_admin_pwd: <AD_admin_password>", "--- - name: Installing and configuring Microsoft SQL Server hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: SQL Server with AD authentication ansible.builtin.include_role: name: microsoft.sql.server vars: mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true mssql_accept_microsoft_sql_server_standard_eula: true mssql_version: 2022 mssql_password: \"{{ sa_pwd }}\" mssql_edition: Developer mssql_tcp_port: 1433 mssql_manage_firewall: true mssql_ad_configure: true mssql_ad_join: true mssql_ad_sql_user: sqluser mssql_ad_sql_password: \"{{ sql_pwd }}\" ad_integration_realm: ad.example.com ad_integration_user: Administrator ad_integration_password: \"{{ ad_admin_pwd }}\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "C:\\> Set-ADUser -Identity sqluser -KerberosEncryptionType AES128,AES256", "kinit [email protected]", "/opt/mssql-tools/bin/sqlcmd -S. -Q 'CREATE LOGIN [AD\\<AD_user>] FROM WINDOWS;'", "kinit <AD_user> @ad.example.com", "/opt/mssql-tools/bin/sqlcmd -S. -Q 'SELECT SYSTEM_USER'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/assembly_configuring-microsoft-sql-server-using-microsoft-sql-server-ansible-role_automating-system-administration-by-using-rhel-system-roles
Chapter 2. Preparing Red Hat Enterprise Linux for a Red Hat Quay proof of concept deployment
Chapter 2. Preparing Red Hat Enterprise Linux for a Red Hat Quay proof of concept deployment Use the following procedures to configure Red Hat Enterprise Linux (RHEL) for a Red Hat Quay proof of concept deployment. 2.1. Install and register the RHEL server Use the following procedure to configure the Red Hat Enterprise Linux (RHEL) server for a Red Hat Quay proof of concept deployment. Procedure Install the latest RHEL 9 server. You can do a minimal, shell-access only install, or Server plus GUI if you want a desktop. Register and subscribe your RHEL server system as described in How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription-Manager . Enter the following commands to register your system and list available subscriptions. Choose an available RHEL server subscription, attach to its pool ID, and upgrade to the latest software: # subscription-manager register --username=<user_name> --password=<password> # subscription-manager refresh # subscription-manager list --available # subscription-manager attach --pool=<pool_id> # yum update -y 2.2. Registry authentication Use the following procedure to authenticate your registry for a Red Hat Quay proof of concept. Procedure Set up authentication to registry.redhat.io by following the Red Hat Container Registry Authentication procedure. Setting up authentication allows you to pull the Quay container. Note This differs from earlier versions of Red Hat Quay, when the images were hosted on Quay.io. Enter the following command to log in to the registry: $ sudo podman login registry.redhat.io You are prompted to enter your username and password. 2.3. Firewall configuration If you have a firewall running on your system, you might have to add rules that allow access to Red Hat Quay. Use the following procedure to configure your firewall for a proof of concept deployment. Procedure The commands required depend on the ports that you have mapped on your system, for example: # firewall-cmd --permanent --add-port=80/tcp \ && firewall-cmd --permanent --add-port=443/tcp \ && firewall-cmd --permanent --add-port=5432/tcp \ && firewall-cmd --permanent --add-port=5433/tcp \ && firewall-cmd --permanent --add-port=6379/tcp \ && firewall-cmd --reload 2.4. IP addressing and naming services There are several ways to configure the component containers in Red Hat Quay so that they can communicate with each other, for example: Using a naming service . If you want your deployment to survive container restarts, which typically result in changed IP addresses, you can implement a naming service. For example, the dnsname plugin is used to allow containers to resolve each other by name. Using the host network . You can use the podman run command with the --net=host option and then use container ports on the host when specifying the addresses in the configuration. This option is susceptible to port conflicts when two containers want to use the same port. This method is not recommended. Configuring port mapping . You can use port mappings to expose ports on the host and then use these ports in combination with the host IP address or host name. This document uses port mapping and assumes a static IP address for your host system. Table 2.1.
Sample proof of concept port mapping Component Port mapping Address Quay -p 80:8080 -p 443:8443 http://quay-server.example.com Postgres for Quay -p 5432:5432 quay-server.example.com:5432 Redis -p 6379:6379 quay-server.example.com:6379 Postgres for Clair V4 -p 5433:5432 quay-server.example.com:5433 Clair V4 -p 8081:8080 http://quay-server.example.com:8081
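The table above assumes that the quay-server.example.com host name resolves to your host's static IP address. If you do not manage DNS for the proof of concept, one simple approach (an illustrative assumption, not a requirement of this guide) is a static /etc/hosts entry; replace 192.0.2.10 with the actual IP address of your host:
# echo "192.0.2.10 quay-server.example.com quay-server" >> /etc/hosts
# ping -c 1 quay-server.example.com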
[ "subscription-manager register --username=<user_name> --password=<password> subscription-manager refresh subscription-manager list --available subscription-manager attach --pool=<pool_id> yum update -y", "sudo podman login registry.redhat.io", "firewall-cmd --permanent --add-port=80/tcp && firewall-cmd --permanent --add-port=443/tcp && firewall-cmd --permanent --add-port=5432/tcp && firewall-cmd --permanent --add-port=5433/tcp && firewall-cmd --permanent --add-port=6379/tcp && firewall-cmd --reload" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/proof_of_concept_-_deploying_red_hat_quay/poc-configuring-rhel-server
Chapter 2. The pcs Command Line Interface
Chapter 2. The pcs Command Line Interface The pcs command line interface provides the ability to control and configure corosync and pacemaker . The general format of the pcs command is as follows. 2.1. The pcs Commands The pcs commands are as follows. cluster Configure cluster options and nodes. For information on the pcs cluster command, see Chapter 3, Cluster Creation and Administration . resource Create and manage cluster resources. For information on the pcs resource command, see Chapter 5, Configuring Cluster Resources , Chapter 7, Managing Cluster Resources , and Chapter 8, Advanced Resource types . stonith Configure fence devices for use with Pacemaker. For information on the pcs stonith command, see Chapter 4, Fencing: Configuring STONITH . constraint Manage resource constraints. For information on the pcs constraint command, see Chapter 6, Resource Constraints . property Set Pacemaker properties. For information on setting properties with the pcs property command, see Chapter 10, Pacemaker Cluster Properties . status View current cluster and resource status. For information on the pcs status command, see Section 2.5, "Displaying Status" . config Display complete cluster configuration in user-readable form. For information on the pcs config command, see Section 2.6, "Displaying the Full Cluster Configuration" .
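As a short, hedged illustration of the -f file option in the general command format shown below: the following sketch saves the raw cluster configuration to a file, stages a resource change against that file instead of the live cluster, and then pushes the result. The file name, resource name, and IP address are placeholders.
# pcs cluster cib my_config.cib
# pcs -f my_config.cib resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.0.2.120 cidr_netmask=24 op monitor interval=30s
# pcs cluster cib-push my_config.cib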
[ "pcs [-f file ] [-h] [ commands ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-pcscommand-haar
Chapter 6. Troubleshooting notification failures
Chapter 6. Troubleshooting notification failures The notifications service event log enables Notifications administrators to see when notifications are not working properly. The event log provides a list of all triggered events on the Red Hat Hybrid Cloud Console account, and actions taken (as configured in the associated behavior group) for the past 14 days. In the Action taken column, each event shows the notification method highlighted in green or red to indicate the status of the message transmission. The filterable event log is a useful troubleshooting tool to see a failed notification event and identify potential issues with endpoints. After seeing a failed action in the event log, the Notifications administrator can check the endpoint and the status of the last five connection attempts on the Integrations screen. In the integrations service, the following connection statuses are reflected by color: Green: Five transmissions were successful. Red: Five transmissions were unsuccessful (timeout, 404 error, etc). Yellow: Connection is degraded; at least two of the five transmissions were unsuccessful. Unknown: The integration has not yet been called, or is not associated with a behavior group. The event log can answer questions related to receipt of emails. By showing the email action for an event as green, the event log enables a Notifications administrator to confirm that emails were sent successfully. Even when notifications and integrations are configured properly, individual users on the Hybrid Cloud Console account must configure their user preferences to receive emails. Before users receive notifications using the webhook integration type, a Notifications administrator must configure endpoints for your organization's preferred webhook application. Prerequisites You are logged in to the Hybrid Cloud Console as a user with Notifications administrator or Organization Administrator permissions. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications > Event Log . Filter the events list by event, application, application bundle, action type, or action status. Select the time frame to show events from today, yesterday, the last seven days, the last 14 days (default), or set a custom range within the last 14 days. Sort the Date and time column in ascending or descending order. Navigate to Settings > Notifications > Configure Events , and verify or change settings by event. Ask users to check their user preferences for receiving email notifications. Even when notifications and integrations are configured properly, individual users on the Hybrid Cloud Console account must configure their user preferences to receive emails. Additional resources For more information about network and firewall configuration, see Firewall Configuration for accessing Red Hat Insights / Hybrid Cloud Console Integrations & Notifications . To configure your personal preferences for receiving notifications, see Configuring user preferences for email notifications .
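When an integration shows red or yellow in the event log, it can also help to confirm that the configured endpoint URL is reachable from your network at all. The following is only a generic connectivity sketch against a hypothetical endpoint; it does not reproduce the console's actual webhook payload format, and the URL is a placeholder for your organization's endpoint:
$ curl -sS -o /dev/null -w '%{http_code}\n' -X POST -H 'Content-Type: application/json' -d '{"ping": true}' https://webhook.example.com/notifications
A 2xx status code suggests the endpoint accepts requests; a timeout or 404 points at the kinds of failures the event log reports.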
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console/proc-troubleshoot_notifications
Chapter 1. Understanding Red Hat Network Functions Virtualization (NFV)
Chapter 1. Understanding Red Hat Network Functions Virtualization (NFV) Network Functions Virtualization (NFV) is a software-based solution that helps Communication Service Providers (CSPs) move beyond traditional, proprietary hardware to achieve greater efficiency and agility while reducing operational costs. An NFV environment allows for IT and network convergence by providing a virtualized infrastructure using the standard virtualization technologies that run on standard hardware devices such as switches, routers, and storage to virtualize network functions (VNFs). The management and orchestration logic deploys and sustains these services. NFV also includes Systems Administration, Automation and Life-Cycle Management, thereby reducing the manual work necessary. 1.1. Advantages of NFV The main advantages of implementing network functions virtualization (NFV) are as follows: Accelerates the time-to-market by allowing you to quickly deploy and scale new networking services to address changing demands. Supports innovation by enabling service developers to self-manage their resources and prototype using the same platform that will be used in production. Addresses customer demands in hours or minutes instead of weeks or days, without sacrificing security or performance. Reduces capital expenditure because it uses commodity-off-the-shelf hardware instead of expensive tailor-made equipment. Uses streamlined operations and automation that optimize day-to-day tasks to improve employee productivity and reduce operational costs. 1.2. Supported Configurations for NFV Deployments You can use the Red Hat OpenStack Platform director toolkit to isolate specific network types, for example, external, project, internal API, and so on. You can deploy a network on a single network interface, or distributed over a multiple-host network interface. With Open vSwitch you can create bonds by assigning multiple interfaces to a single bridge. Configure network isolation in a Red Hat OpenStack Platform installation with template files. If you do not provide template files, the service networks deploy on the provisioning network. There are two types of template configuration files: network-environment.yaml This file contains network details, such as subnets and IP address ranges, for the overcloud nodes. This file also contains the different settings that override the default parameter values for various scenarios. Host network templates, for example, compute.yaml and controller.yaml These templates define the network interface configuration for the overcloud nodes. The values of the network details are provided by the network-environment.yaml file. These heat template files are located at /usr/share/openstack-tripleo-heat-templates/ on the undercloud node. For samples of these heat template files for NFV, see Sample DPDK SR-IOV YAML files . The Hardware requirements and Software requirements sections provide more details on how to plan and configure the heat template files for NFV using the Red Hat OpenStack Platform director. You can edit YAML files to configure NFV. For an introduction to the YAML file format, see YAML in a Nutshell . Data Plane Development Kit (DPDK) and Single Root I/O Virtualization (SR-IOV) Red Hat OpenStack Platform (RHOSP) supports NFV deployments with the inclusion of automated OVS-DPDK and SR-IOV configuration. Important Red Hat does not support the use of OVS-DPDK for non-NFV workloads.
If you need OVS-DPDK functionality for non-NFV workloads, contact your Technical Account Manager (TAM) or open a customer service request case to discuss a Support Exception and other options. To open a customer service request case, go to Create a case and choose Account > Customer Service Request . Hyper-converged Infrastructure (HCI) You can colocate the Compute sub-system with the Red Hat Ceph Storage nodes. This hyper-converged model delivers lower cost of entry, smaller initial deployment footprints, maximized capacity utilization, and more efficient management in NFV use cases. For more information about HCI, see Deploying a hyperconverged infrastructure . Composable roles You can use composable roles to create custom deployments. Composable roles allow you to add or remove services from each role. For more information about composable roles, see Composable services and custom roles in Customizing your Red Hat OpenStack Platform deployment . Open vSwitch (OVS) with LACP As of OVS 2.9, LACP with OVS is fully supported. This is not recommended for OpenStack control plane traffic, as OVS or OpenStack Networking interruptions might interfere with management. For more information, see Open vSwitch (OVS) bonding options in Installing and managing Red Hat OpenStack Platform with director . OVS Hardware offload Red Hat OpenStack Platform supports, with limitations, the deployment of OVS hardware offload. For information about deploying OVS with hardware offload, see Configuring OVS hardware offload . Open Virtual Network (OVN) The following NFV OVN configurations are available in RHOSP 16.1.4: Deploying OVN with OVS-DPDK and SR-IOV Deploying OVN with OVS TC Flower offload 1.3. NFV data plane connectivity With the introduction of NFV, more networking vendors are starting to implement their traditional devices as VNFs. While the majority of networking vendors are considering virtual machines, some are also investigating a container-based approach as a design choice. An OpenStack-based solution should be rich and flexible for two primary reasons: Application readiness - Network vendors are currently in the process of transforming their devices into VNFs. Different VNFs in the market have different maturity levels; common barriers to this readiness include enabling RESTful interfaces in their APIs, evolving their data models to become stateless, and providing automated management operations. OpenStack should provide a common platform for all. Broad use-cases - NFV includes a broad range of applications that serve different use-cases. For example, Virtual Customer Premise Equipment (vCPE) aims at providing a number of network functions such as routing, firewall, virtual private network (VPN), and network address translation (NAT) at customer premises. Virtual Evolved Packet Core (vEPC) is a cloud architecture that provides a cost-effective platform for the core components of a Long-Term Evolution (LTE) network, allowing dynamic provisioning of gateways and mobile endpoints to sustain the increased volumes of data traffic from smartphones and other devices. These use cases are implemented using different network applications and protocols, and require different connectivity, isolation, and performance characteristics from the infrastructure. It is also common to separate the control plane interfaces and protocols from the actual forwarding plane. OpenStack must be flexible enough to offer different datapath connectivity options.
In principle, there are two common approaches for providing data plane connectivity to virtual machines: Direct hardware access bypasses the linux kernel and provides secure direct memory access (DMA) to the physical NIC using technologies such as PCI Passthrough or single root I/O virtualization (SR-IOV) for both Virtual Function (VF) and Physical Function (PF) pass-through. Using a virtual switch (vswitch) , implemented as a software service of the hypervisor. Virtual machines are connected to the vSwitch using virtual interfaces (vNICs), and the vSwitch is capable of forwarding traffic between virtual machines, as well as between virtual machines and the physical network. Some of the fast data path options are as follows: Single Root I/O Virtualization (SR-IOV) is a standard that makes a single PCI hardware device appear as multiple virtual PCI devices. It works by introducing Physical Functions (PFs), which are the fully featured PCIe functions that represent the physical hardware ports, and Virtual Functions (VFs), which are lightweight functions that are assigned to the virtual machines. To the VM, the VF resembles a regular NIC that communicates directly with the hardware. NICs support multiple VFs. Open vSwitch (OVS) is an open source software switch that is designed to be used as a virtual switch within a virtualized server environment. OVS supports the capabilities of a regular L2-L3 switch and also offers support to the SDN protocols such as OpenFlow to create user-defined overlay networks (for example, VXLAN). OVS uses Linux kernel networking to switch packets between virtual machines and across hosts using physical NIC. OVS now supports connection tracking (Conntrack) with built-in firewall capability to avoid the overhead of Linux bridges that use iptables/ebtables. Open vSwitch for Red Hat OpenStack Platform environments offers default OpenStack Networking (neutron) integration with OVS. Data Plane Development Kit (DPDK) consists of a set of libraries and poll mode drivers (PMD) for fast packet processing. It is designed to run mostly in the user-space, enabling applications to perform their own packet processing directly from or to the NIC. DPDK reduces latency and allows more packets to be processed. DPDK Poll Mode Drivers (PMDs) run in busy loop, constantly scanning the NIC ports on host and vNIC ports in guest for arrival of packets. DPDK accelerated Open vSwitch (OVS-DPDK) is Open vSwitch bundled with DPDK for a high performance user-space solution with Linux kernel bypass and direct memory access (DMA) to physical NICs. The idea is to replace the standard OVS kernel data path with a DPDK-based data path, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding. The advantage of this architecture is that it is mostly transparent to users. The interfaces it exposes, such as OpenFlow, OVSDB, the command line, remain mostly the same. 1.4. ETSI NFV Architecture The European Telecommunications Standards Institute (ETSI) is an independent standardization group that develops standards for information and communications technologies (ICT) in Europe. Network functions virtualization (NFV) focuses on addressing problems involved in using proprietary hardware devices. With NFV, the necessity to install network-specific equipment is reduced, depending upon the use case requirements and economic benefits. 
The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) sets the requirements, reference architecture, and the infrastructure specifications necessary to ensure virtualized functions are supported. Red Hat offers an open-source, cloud-optimized solution to help Communication Service Providers (CSPs) achieve IT and network convergence. Red Hat adds NFV features such as single root I/O virtualization (SR-IOV) and Open vSwitch with Data Plane Development Kit (OVS-DPDK) to Red Hat OpenStack. 1.5. NFV ETSI architecture and components In general, a network functions virtualization (NFV) platform has the following components: Figure 1.1. NFV ETSI architecture and components Virtualized Network Functions (VNFs) - the software implementation of routers, firewalls, load balancers, broadband gateways, mobile packet processors, servicing nodes, signalling, location services, and other network functions. NFV Infrastructure (NFVi) - the physical resources (compute, storage, network) and the virtualization layer that make up the infrastructure. The network includes the datapath for forwarding packets between virtual machines and across hosts. This allows you to install VNFs without being concerned about the details of the underlying hardware. NFVi forms the foundation of the NFV stack. NFVi supports multi-tenancy and is managed by the Virtual Infrastructure Manager (VIM). Enhanced Platform Awareness (EPA) improves the virtual machine packet forwarding performance (throughput, latency, jitter) by exposing low-level CPU and NIC acceleration components to the VNF. NFV Management and Orchestration (MANO) - the management and orchestration layer focuses on all the service management tasks required throughout the life cycle of the VNF. The main goal of MANO is to allow service definition, automation, error-correlation, monitoring, and life-cycle management of the network functions offered by the operator to its customers, decoupled from the physical infrastructure. This decoupling requires additional layers of management, provided by the Virtual Network Function Manager (VNFM). VNFM manages the life cycle of the virtual machines and VNFs by either interacting directly with them or through the Element Management System (EMS) provided by the VNF vendor. The other important component defined by MANO is the Orchestrator, also known as NFVO. NFVO interfaces with various databases and systems including Operations/Business Support Systems (OSS/BSS) on the top and the VNFM on the bottom. If the NFVO wants to create a new service for a customer, it asks the VNFM to trigger the instantiation of a VNF, which may result in multiple virtual machines. Operations and Business Support Systems (OSS/BSS) - provides the essential business function applications, for example, operations support and billing. The OSS/BSS needs to be adapted to NFV, integrating with both legacy systems and the new MANO components. The BSS systems set policies based on service subscriptions and manage reporting and billing. Systems Administration, Automation and Life-Cycle Management - manages system administration, automation of the infrastructure components and life cycle of the NFVi platform. 1.6. Red Hat NFV components Red Hat's solution for NFV includes a range of products that can act as the different components of the NFV framework in the ETSI model. The following products from the Red Hat portfolio integrate into an NFV solution: Red Hat OpenStack Platform - Supports IT and NFV workloads.
The Enhanced Platform Awareness (EPA) features deliver deterministic performance improvements through CPU Pinning, Huge pages, Non-Uniform Memory Access (NUMA) affinity and network adaptors (NICs) that support SR-IOV and OVS-DPDK. Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host - Create virtual machines and containers as VNFs. Red Hat Ceph Storage - Provides the unified, elastic, and high-performance storage layer for all the needs of the service provider workloads. Red Hat JBoss Middleware and OpenShift Enterprise by Red Hat - Optionally provide the ability to modernize the OSS/BSS components. Red Hat CloudForms - Provides a VNF manager and presents data from multiple sources, such as the VIM and the NFVi, in a unified display. Red Hat Satellite and Ansible by Red Hat - Optionally provide enhanced systems administration, automation and life-cycle management. 1.7. NFV installation summary The Red Hat OpenStack Platform director installs and manages a complete OpenStack environment. The director is based on the upstream OpenStack TripleO project, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of the OpenStack components to install a fully operational OpenStack environment; this includes a minimal OpenStack node called the undercloud. The undercloud provisions and controls the overcloud (a series of bare metal systems used as the production OpenStack nodes). The director provides a simple method for installing a complete Red Hat OpenStack Platform environment that is both lean and robust. For more information on installing the undercloud and overcloud, see Red Hat OpenStack Platform Installing and managing Red Hat OpenStack Platform with director . To install the NFV features, complete the following additional steps: Include SR-IOV and PCI Passthrough parameters in your network-environment.yaml file, update the post-install.yaml file for CPU tuning, modify the compute.yaml file, and run the overcloud_deploy.sh script to deploy the overcloud. Install the DPDK libraries and drivers for fast packet processing by polling data directly from the NICs. Include the DPDK parameters in your network-environment.yaml file, update the post-install.yaml files for CPU tuning, update the compute.yaml file to set the bridge with DPDK port, update the controller.yaml file to set the bridge and an interface with VLAN configured, and run the overcloud_deploy.sh script to deploy the overcloud.
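The overcloud_deploy.sh script referenced above is typically a thin wrapper around the openstack overcloud deploy command run from the undercloud. The following is only a rough, hedged sketch; the file path is an assumption based on the templates discussed in this chapter, and your deployment script may pass additional or different environment files.
$ source ~/stackrc
$ openstack overcloud deploy --templates -e /home/stack/templates/network-environment.yaml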
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/understanding-nfv_rhosp-nfv
Chapter 23. Using different DNS servers for different domains
Chapter 23. Using different DNS servers for different domains By default, Red Hat Enterprise Linux (RHEL) sends all DNS requests to the first DNS server specified in the /etc/resolv.conf file. If this server does not reply, RHEL tries the next server in this file, and so on, until it finds a working one. In environments where one DNS server cannot resolve all domains, administrators can configure RHEL to send DNS requests for a specific domain to a selected DNS server. For example, you connect a server to a Virtual Private Network (VPN), and hosts in the VPN use the example.com domain. In this case, you can configure RHEL to process DNS queries in the following way: Send only DNS requests for example.com to the DNS server in the VPN network. Send all other requests to the DNS server that is configured in the connection profile with the default gateway. 23.1. Using dnsmasq in NetworkManager to send DNS requests for a specific domain to a selected DNS server You can configure NetworkManager to start an instance of dnsmasq . This DNS caching server then listens on port 53 on the loopback device. Consequently, this service is only reachable from the local system and not from the network. With this configuration, NetworkManager adds the nameserver 127.0.0.1 entry to the /etc/resolv.conf file, and dnsmasq dynamically routes DNS requests to the corresponding DNS servers specified in the NetworkManager connection profiles. Prerequisites The system has multiple NetworkManager connections configured. A DNS server and search domain are configured in the NetworkManager connection profile that is responsible for resolving a specific domain. For example, to ensure that the DNS server specified in a VPN connection resolves queries for the example.com domain, the VPN connection profile must contain the following settings: A DNS server that can resolve example.com A search domain set to example.com in the ipv4.dns-search and ipv6.dns-search parameters The dnsmasq service is either not running or is configured to listen on an interface other than localhost . Procedure Install the dnsmasq package: Edit the /etc/NetworkManager/NetworkManager.conf file, and set the following entry in the [main] section: Reload the NetworkManager service: Verification Search in the systemd journal of the NetworkManager unit for which domains the service uses a different DNS server: Use the tcpdump packet sniffer to verify the correct route of DNS requests: Install the tcpdump package: On one terminal, start tcpdump to capture DNS traffic on all interfaces: On a different terminal, resolve host names for a domain for which an exception exists and another domain, for example: Verify in the tcpdump output that Red Hat Enterprise Linux sends only DNS queries for the example.com domain to the designated DNS server and through the corresponding interface: Red Hat Enterprise Linux sends the DNS query for www.example.com to the DNS server on 198.51.100.7 and the query for www.redhat.com to 192.0.2.1 . Troubleshooting Verify that the nameserver entry in the /etc/resolv.conf file refers to 127.0.0.1 : If the entry is missing, check the dns parameter in the /etc/NetworkManager/NetworkManager.conf file. Verify that the dnsmasq service listens on port 53 on the loopback device: If the service does not listen on 127.0.0.1:53 , check the journal entries of the NetworkManager unit: 23.2.
Using systemd-resolved in NetworkManager to send DNS requests for a specific domain to a selected DNS server You can configure NetworkManager to start an instance of systemd-resolved . This DNS stub resolver then listens on port 53 on IP address 127.0.0.53 . Consequently, this stub resolver is only reachable from the local system and not from the network. With this configuration, NetworkManager adds the nameserver 127.0.0.53 entry to the /etc/resolv.conf file, and systemd-resolved dynamically routes DNS requests to the corresponding DNS servers specified in the NetworkManager connection profiles. Important The systemd-resolved service is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. For a supported solution, see Using dnsmasq in NetworkManager to send DNS requests for a specific domain to a selected DNS server . Prerequisites The system has multiple NetworkManager connections configured. A DNS server and search domain are configured in the NetworkManager connection profile that is responsible for resolving a specific domain. For example, to ensure that the DNS server specified in a VPN connection resolves queries for the example.com domain, the VPN connection profile must contain the following settings: A DNS server that can resolve example.com A search domain set to example.com in the ipv4.dns-search and ipv6.dns-search parameters Procedure Enable and start the systemd-resolved service: Edit the /etc/NetworkManager/NetworkManager.conf file, and set the following entry in the [main] section: Reload the NetworkManager service: Verification Display the DNS servers systemd-resolved uses and for which domains the service uses a different DNS server: The output confirms that systemd-resolved uses different DNS servers for the example.com domain. Use the tcpdump packet sniffer to verify the correct route of DNS requests: Install the tcpdump package: On one terminal, start tcpdump to capture DNS traffic on all interfaces: On a different terminal, resolve host names for a domain for which an exception exists and another domain, for example: Verify in the tcpdump output that Red Hat Enterprise Linux sends only DNS queries for the example.com domain to the designated DNS server and through the corresponding interface: Red Hat Enterprise Linux sends the DNS query for www.example.com to the DNS server on 198.51.100.7 and the query for www.redhat.com to 192.0.2.1 . Troubleshooting Verify that the nameserver entry in the /etc/resolv.conf file refers to 127.0.0.53 : If the entry is missing, check the dns parameter in the /etc/NetworkManager/NetworkManager.conf file. Verify that the systemd-resolved service listens on port 53 on the local IP address 127.0.0.53 : If the service does not listen on 127.0.0.53:53 , check if the systemd-resolved service is running.
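Both procedures in this chapter require the VPN connection profile to carry its own DNS server and search domain. A hedged nmcli sketch for setting these on an existing profile follows; the connection name vpn-example is a placeholder, and 198.51.100.7 matches the DNS server used in the chapter's examples:
# nmcli connection modify vpn-example ipv4.dns 198.51.100.7 ipv4.dns-search example.com ipv6.dns-search example.com
# nmcli connection up vpn-example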
[ "yum install dnsmasq", "dns=dnsmasq", "systemctl reload NetworkManager", "journalctl -xeu NetworkManager Jun 02 13:30:17 <client_hostname>_ dnsmasq[5298]: using nameserver 198.51.100.7#53 for domain example.com", "yum install tcpdump", "tcpdump -i any port 53", "host -t A www.example.com host -t A www.redhat.com", "13:52:42.234533 IP server .43534 > 198.51.100.7 .domain: 50121+ [1au] A? www.example.com. (33) 13:52:57.753235 IP server .40864 > 192.0.2.1 .domain: 6906+ A? www.redhat.com. (33)", "cat /etc/resolv.conf nameserver 127.0.0.1", "ss -tulpn | grep \"127.0.0.1:53\" udp UNCONN 0 0 127.0.0.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=7340,fd=18)) tcp LISTEN 0 32 127.0.0.1:53 0.0.0.0:* users:((\"dnsmasq\",pid=7340,fd=19))", "journalctl -u NetworkManager", "systemctl --now enable systemd-resolved", "dns=systemd-resolved", "systemctl reload NetworkManager", "resolvectl Link 2 ( enp1s0 ) Current Scopes: DNS Protocols: +DefaultRoute Current DNS Server: 192.0.2.1 DNS Servers: 192.0.2.1 Link 3 ( tun0 ) Current Scopes: DNS Protocols: -DefaultRoute Current DNS Server: 198.51.100.7 DNS Servers: 198.51.100.7 203.0.113.19 DNS Domain: example.com", "yum install tcpdump", "tcpdump -i any port 53", "host -t A www.example.com host -t A www.redhat.com", "13:52:42.234533 IP server .43534 > 198.51.100.7 .domain: 50121+ [1au] A? www.example.com. (33) 13:52:57.753235 IP server .40864 > 192.0.2.1 .domain: 6906+ A? www.redhat.com. (33)", "cat /etc/resolv.conf nameserver 127.0.0.53", "ss -tulpn | grep \"127.0.0.53\" udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:((\"systemd-resolve\",pid=1050,fd=12)) tcp LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:((\"systemd-resolve\",pid=1050,fd=13))" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/using-different-dns-servers-for-different-domains_configuring-and-managing-networking
Chapter 2. Deploying OpenShift Data Foundation on Microsoft Azure
Chapter 2. Deploying OpenShift Data Foundation on Microsoft Azure You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Microsoft Azure installer-provisioned infrastructure (IPI) (type: managed-csi ) that enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. 
Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . If you want to use Azure Vault as the key management service provider, make sure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to managed-csi . Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. 
If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault For information about setting up client authentication and fetching the client credentials in Azure platform, see the Prerequisites section of this procedure. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload Certificate file in .PEM format and the certificate file must include a client certificate and a private key. To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . 
In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide.
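The prerequisites above mention dedicating nodes to OpenShift Data Foundation without showing the commands. A minimal sketch follows, assuming a placeholder node name worker-1 and the label and taint keys described in the dedicated worker nodes knowledge base article; verify both against your OpenShift Data Foundation version before applying them:

# Label the node so the StorageCluster schedules storage pods on it, then taint it
# so that only OpenShift Data Foundation workloads land there.
oc label node worker-1 cluster.ocs.openshift.io/openshift-storage=""
oc adm taint node worker-1 node.ocs.openshift.io/storage=true:NoSchedule

# Confirm the taint took effect.
oc describe node worker-1 | grep -A1 Taints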
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" 
--overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_microsoft_azure/deploying-openshift-data-foundation-on-microsoft-azure_azure
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by navigating to Home Dashboards Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides a range of cluster information, captured in individual dashboard cards. 3.1. About the OpenShift Container Platform dashboards page The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Status values include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment). Cluster Capacity charts help administrators understand when additional resources are required in the cluster. The charts contain an inner ring that displays current consumption, while an outer ring displays thresholds configured for the resource, including information about: CPU time Memory allocation Storage consumed Network resources consumed Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption. Events lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. Top Consumers helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage).
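Much of what the dashboard cards display can also be pulled from the command line. The commands below are a rough CLI counterpart, not an exhaustive mapping, and assume the cluster monitoring stack is running so that oc adm top returns data:

oc get clusterversion                                # Details card: version and update status
oc get nodes                                         # Cluster Inventory: node count and readiness
oc get pods --all-namespaces --no-headers | wc -l    # Cluster Inventory: pod count
oc get pvc --all-namespaces                          # Cluster Inventory: persistent volume claims
oc adm top nodes                                     # Cluster Utilization: CPU and memory per node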
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/web_console/using-dashboard-to-get-cluster-info
Chapter 3. Viewing your Insights results
Chapter 3. Viewing your Insights results You can view system and infrastructure results in the Red Hat Insights for Red Hat Enterprise Linux application dashboard. The dashboard provides links to each available Insights service, including advisor, vulnerability, compliance, policies, and patch. From this starting point, you can proactively identify and manage issues affecting system security, performance, stability, and availability. Prerequisites The insights-client package is installed on the system. You are logged in to the Red Hat Hybrid Cloud Console. Procedure Navigate to Red Hat Insights > RHEL > Inventory in the Hybrid Cloud Console. Search for your system name and confirm that it exists in the inventory.
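If you are not sure whether the insights-client prerequisite is met, a quick check from the system itself looks roughly like the following; the registration step is only needed if the host has never reported to Insights before:

# Install and register the client (skip registration if already done).
yum install -y insights-client
insights-client --register

# Confirm the system is registered, so it appears under
# Red Hat Insights > RHEL > Inventory in the Hybrid Cloud Console.
insights-client --status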
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_red_hat_insights_on_existing_rhel_systems_managed_by_red_hat_cloud_access/viewing-insights-results_deploying-insights-with-rhca
Chapter 9. Migrating your applications
Chapter 9. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 9.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 9.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . 
Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 9.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster. 
In the Azure CLI, you can display all resource groups by issuing the following command: USD az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 9.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 9.2.4. 
Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. 
If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 9.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
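The console actions described above map onto MTC custom resources, so a stage or cutover migration can also be triggered from the command line. The following is only a sketch: the plan name is a placeholder for an existing MigPlan, and the field names should be checked against the API reference for your MTC release:

cat <<EOF | oc create -f -
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  generateName: my-plan-stage-
  namespace: openshift-migration
spec:
  migPlanRef:
    name: my-plan                # placeholder: name of an existing MigPlan
    namespace: openshift-migration
  stage: true                    # copy data without stopping the application
  quiescePods: false             # set stage: false and quiescePods: true for a cutover
EOF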
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" }," ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/migrating-applications-with-mtc
5.9. Configuring Fencing Levels
5.9. Configuring Fencing Levels Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing topology section in the configuration. Each level is attempted in ascending numeric order, starting at 1. If a device fails, processing terminates for the current level. No further devices in that level are exercised and the next level is attempted instead. If all devices are successfully fenced, then that level has succeeded and no other levels are tried. The operation is finished when a level has passed (success), or all levels have been attempted (failed). Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level. The following command lists all of the fencing levels that are currently configured. In the following example, there are two fence devices configured for node rh7-2 : an ilo fence device called my_ilo and an apc fence device called my_apc . These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc . This example also shows the output of the pcs stonith level command after the levels are configured. The following command removes the fence level for the specified node and devices. If no nodes or devices are specified then the fence level you specify is removed from all nodes. The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared. If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example. The following command verifies that all fence devices and nodes specified in fence levels exist. As of Red Hat Enterprise Linux 7.4, you can specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For example, the following commands configure nodes node1 , node2 , and node3 to use fence devices apc1 and apc2 , and nodes node4 , node5 , and node6 to use fence devices apc3 and apc4 . The following commands yield the same results by using node attribute matching.
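The example above assumes the my_ilo and my_apc devices already exist. A sketch of creating them first and then layering the levels might look like the following; the fence agents, addresses, and credentials are placeholders that must match your hardware:

# Create the two stonith devices (parameters are illustrative only).
pcs stonith create my_ilo fence_ilo ipaddr=ilo.example.com login=admin passwd=secret pcmk_host_list=rh7-2
pcs stonith create my_apc fence_apc ipaddr=apc.example.com login=admin passwd=secret pcmk_host_list=rh7-2

# Try the iLO device first and fall back to the APC switch.
pcs stonith level add 1 rh7-2 my_ilo
pcs stonith level add 2 rh7-2 my_apc
pcs stonith level verify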
[ "pcs stonith level add level node devices", "pcs stonith level", "pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc", "pcs stonith level remove level [ node_id ] [ stonith_id ] ... [ stonith_id ]", "pcs stonith level clear [ node | stonith_id (s)]", "pcs stonith level clear dev_a,dev_b", "pcs stonith level verify", "pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-fencelevels-HAAR
4.3. Configuring Static Routes with GUI
4.3. Configuring Static Routes with GUI To set a static route, open the IPv4 or IPv6 settings window for the connection you want to configure. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for instructions on how to do that. Routes Address - Enter the IP address of a remote network, sub-net, or host. Netmask - The netmask or prefix length of the IP address entered above. Gateway - The IP address of the gateway leading to the remote network, sub-net, or host entered above. Metric - A network cost, a preference value to give to this route. Lower values will be preferred over higher values. Automatic When Automatic is ON , routes from RA or DHCP are used, but you can also add additional static routes. When OFF , only static routes you define are used. Use this connection only for resources on its network Select this check box to prevent the connection from becoming the default route. Typical examples are where a connection is a VPN tunnel or a leased line to a head office and you do not want any Internet-bound traffic to pass over the connection. Selecting this option means that only traffic specifically destined for routes learned automatically over the connection or entered here manually will be routed over the connection.
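Each field in this dialog corresponds to a property on the connection profile, so the same settings can be applied with nmcli if that is easier to script. The connection name and addresses below are placeholders:

# Address, Netmask/prefix, Gateway, and Metric from the Routes table:
nmcli connection modify enp1s0 +ipv4.routes "192.0.2.0/24 198.51.100.1 100"

# Automatic switched OFF (ignore routes supplied by DHCP or RA):
nmcli connection modify enp1s0 ipv4.ignore-auto-routes yes

# "Use this connection only for resources on its network":
nmcli connection modify enp1s0 ipv4.never-default yes

# Re-activate the connection to apply the changes.
nmcli connection up enp1s0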
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_static_routes_with_gui
Chapter 28. Ref
Chapter 28. Ref Overview The Ref expression language is really just a way to look up a custom Expression from the Registry . This is particularly convenient to use in the XML DSL. The Ref language is part of camel-core . Static import To use the Ref language in your Java application code, include the following import statement in your Java source files: XML example For example, the splitter pattern can reference a custom expression using the Ref language, as follows: Java example The preceding route can also be implemented in the Java DSL, as follows:
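The XML example refers to a bean of class com.mycompany.MyCustomExpression but does not show it. A minimal sketch of such a class is given below; the comma-splitting behavior is purely illustrative, and any class implementing org.apache.camel.Expression and registered under the id myExpression would work the same way:

package com.mycompany;

import org.apache.camel.Exchange;
import org.apache.camel.Expression;

// Splits the incoming message body on commas so that the splitter
// produces one exchange per token.
public class MyCustomExpression implements Expression {

    @Override
    public <T> T evaluate(Exchange exchange, Class<T> type) {
        String body = exchange.getIn().getBody(String.class);
        String[] parts = (body == null) ? new String[0] : body.split(",");
        // Let Camel's type converter coerce the array to the requested type.
        return exchange.getContext().getTypeConverter().convertTo(type, parts);
    }
}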
[ "import static org.apache.camel.language.ref.RefLanguage.ref;", "<beans ...> <bean id=\" myExpression \" class=\"com.mycompany.MyCustomExpression\"/> <camelContext> <route> <from uri=\"seda:a\"/> <split> <ref> myExpression </ref> <to uri=\"mock:b\"/> </split> </route> </camelContext> </beans>", "from(\"seda:a\") .split().ref(\"myExpression\") .to(\"seda:b\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/ref
Chapter 3. Distributed tracing installation
Chapter 3. Distributed tracing installation 3.1. Installing distributed tracing You can install Red Hat OpenShift distributed tracing on OpenShift Container Platform in either of two ways: You can install Red Hat OpenShift distributed tracing as part of Red Hat OpenShift Service Mesh. Distributed tracing is included by default in the Service Mesh installation. To install Red Hat OpenShift distributed tracing as part of a service mesh, follow the Red Hat Service Mesh Installation instructions. You must install Red Hat OpenShift distributed tracing in the same namespace as your service mesh, that is, the ServiceMeshControlPlane and the Red Hat OpenShift distributed tracing resources must be in the same namespace. If you do not want to install a service mesh, you can use the Red Hat OpenShift distributed tracing Operators to install distributed tracing by itself. To install Red Hat OpenShift distributed tracing without a service mesh, use the following instructions. 3.1.1. Prerequisites Before you can install Red Hat OpenShift distributed tracing, review the installation activities, and ensure that you meet the prerequisites: Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.7 overview . Install OpenShift Container Platform 4.7. Install OpenShift Container Platform 4.7 on AWS Install OpenShift Container Platform 4.7 on user-provisioned AWS Install OpenShift Container Platform 4.7 on bare metal Install OpenShift Container Platform 4.7 on vSphere Install the version of the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version and add it to your path. An account with the cluster-admin role. 3.1.2. Red Hat OpenShift distributed tracing installation overview The steps for installing Red Hat OpenShift distributed tracing are as follows: Review the documentation and determine your deployment strategy. If your deployment strategy requires persistent storage, install the OpenShift Elasticsearch Operator via the OperatorHub. Install the Red Hat OpenShift distributed tracing platform Operator via the OperatorHub. Modify the custom resource YAML file to support your deployment strategy. Deploy one or more instances of Red Hat OpenShift distributed tracing platform to your OpenShift Container Platform environment. 3.1.3. Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing, giving demonstrations, or using Red Hat OpenShift distributed tracing platform in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. 
The Red Hat OpenShift distributed tracing platform Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait until you see that the OpenShift Elasticsearch Operator shows a status of "InstallSucceeded" before continuing. 3.1.4. Installing the Red Hat OpenShift distributed tracing platform Operator To install Red Hat OpenShift distributed tracing platform, you use the OperatorHub to install the Red Hat OpenShift distributed tracing platform Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributing tracing platform into the filter to locate the Red Hat OpenShift distributed tracing platform Operator. Click the Red Hat OpenShift distributed tracing platform Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . 
This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing platform Operator shows a status of "Succeeded" before continuing. 3.1.5. Installing the Red Hat OpenShift distributed tracing data collection Operator Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . To install Red Hat OpenShift distributed tracing data collection, you use the OperatorHub to install the Red Hat OpenShift distributed tracing data collection Operator. By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type distributing tracing data collection into the filter to locate the Red Hat OpenShift distributed tracing data collection Operator. Click the Red Hat OpenShift distributed tracing data collection Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, accept the default stable Update channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait until you see that the Red Hat OpenShift distributed tracing data collection Operator shows a status of "Succeeded" before continuing. 3.2. Configuring and deploying distributed tracing The Red Hat OpenShift distributed tracing platform Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform resources. You can either install the default configuration or modify the file to better suit your business requirements. Red Hat OpenShift distributed tracing platform has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a distributed tracing platform instance, the Operator uses this configuration file to create the objects necessary for the deployment. Jaeger custom resource file showing deployment strategy apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1 1 The Red Hat OpenShift distributed tracing platform Operator currently supports the following deployment strategies: allInOne (Default) - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage. Note In-memory storage is not persistent, which means that if the distributed tracing platform instance shuts down, restarts, or is replaced, your trace data will be lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. production - The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable and highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type - currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. streaming - The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the Elasticsearch backend storage. This reduces the pressure on the backend storage under high load situations and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform ( AMQ Streams / Kafka ). Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. Note The streaming deployment strategy is currently unsupported on IBM Z. Note There are two ways to install and use Red Hat OpenShift distributed tracing, as part of a service mesh or as a standalone component.
If you have installed distributed tracing as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the ServiceMeshControlPlane , but for complete control you should configure a Jaeger CR and then reference your distributed tracing configuration file in the ServiceMeshControlPlane . 3.2.1. Deploying the distributed tracing default strategy from the web console The custom resource definition (CRD) defines the configuration used when you deploy an instance of Red Hat OpenShift distributed tracing. The default CR is named jaeger-all-in-one-inmemory and it is configured with minimal resources to ensure that you can successfully install it on a default OpenShift Container Platform installation. You can use this default configuration to create a Red Hat OpenShift distributed tracing platform instance that uses the AllInOne deployment strategy, or you can define your own custom resource file. Note In-memory storage is not persistent. If the Jaeger pod shuts down, restarts, or is replaced, your trace data will be lost. For persistent storage, you must use the production or streaming strategies, which use Elasticsearch as the default storage. Prerequisites The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Details tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, to install using the defaults, click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-all-in-one-inmemory . On the Jaeger Details page, click the Resources tab. Wait until the pod has a status of "Running" before continuing. 3.2.1.1. Deploying the distributed tracing default strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The Red Hat OpenShift distributed tracing platform Operator has been installed and verified. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system .
USD oc new-project tracing-system Create a custom resource file named jaeger.yaml that contains the following text: Example jaeger-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory Run the following command to deploy distributed tracing platform: USD oc create -n tracing-system -f jaeger.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s 3.2.2. Deploying the distributed tracing production strategy from the web console The production deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . On the Create Jaeger page, replace the default all-in-one YAML text with your production YAML configuration, for example: Example jaeger-production.yaml file with Elasticsearch apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *' Click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-prod-elasticsearch . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 3.2.2.1. Deploying the distributed tracing production strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The OpenShift Elasticsearch Operator has been installed. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. 
Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . USD oc new-project tracing-system Create a custom resource file named jaeger-production.yaml that contains the text of the example file in the procedure. Run the following command to deploy distributed tracing platform: USD oc create -n tracing-system -f jaeger-production.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s 3.2.3. Deploying the distributed tracing streaming strategy from the web console The streaming deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important. The streaming strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the Kafka streaming platform. Note The streaming strategy requires an additional Red Hat subscription for AMQ Streams. If you do not have an AMQ Streams subscription, contact your sales representative for more information. Note The streaming deployment strategy is currently unsupported on IBM Z. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a new project, for example tracing-system . Note If you are installing as part of Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource, for example istio-system . Navigate to Home Projects . Click Create Project . Enter tracing-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select tracing-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift distributed tracing platform Operator. On the Overview tab, under Provided APIs , the Operator provides a single link. Under Jaeger , click Create Instance . 
On the Create Jaeger page, replace the default all-in-one YAML text with your streaming YAML configuration, for example: Example jaeger-streaming.yaml file apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans #Note: If brokers are not defined,AMQStreams 1.4.0+ will self-provision Kafka. brokers: my-cluster-kafka-brokers.kafka:9092 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 Click Create to create the distributed tracing platform instance. On the Jaegers page, click the name of the distributed tracing platform instance, for example, jaeger-streaming . On the Jaeger Details page, click the Resources tab. Wait until all the pods have a status of "Running" before continuing. 3.2.3.1. Deploying the distributed tracing streaming strategy from the CLI Follow this procedure to create an instance of distributed tracing platform from the command line. Prerequisites The AMQ Streams Operator has been installed. If using version 1.4.0 or higher you can use self-provisioning. Otherwise you must create the Kafka instance. The Red Hat OpenShift distributed tracing platform Operator has been installed. You have reviewed the instructions for how to customize the deployment. You have access to the OpenShift CLI ( oc ) that matches your OpenShift Container Platform version. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443 Create a new project named tracing-system . USD oc new-project tracing-system Create a custom resource file named jaeger-streaming.yaml that contains the text of the example file in the procedure. Run the following command to deploy Jaeger: USD oc create -n tracing-system -f jaeger-streaming.yaml Run the following command to watch the progress of the pods during the installation process: USD oc get pods -n tracing-system -w After the installation process has completed, you should see output similar to the following example: NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s 3.2.4. Validating your deployment 3.2.4.1. Accessing the Jaeger console To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing installed, and Red Hat OpenShift distributed tracing platform installed, configured, and deployed. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the control plane project, for example tracing-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, tracing-system is the control plane namespace. USD export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 3.2.5. Customizing your deployment 3.2.5.1. Deployment best practices Red Hat OpenShift distributed tracing instance names must be unique. If you want to have multiple Red Hat OpenShift distributed tracing platform instances and are using sidecar injected agents, then the Red Hat OpenShift distributed tracing platform instances should have unique names, and the injection annotation should explicitly specify the Red Hat OpenShift distributed tracing platform instance name the tracing data should be reported to. If you have a multitenant implementation and tenants are separated by namespaces, deploy a Red Hat OpenShift distributed tracing platform instance to each tenant namespace. Agent as a daemonset is not supported for multitenant installations or Red Hat OpenShift Dedicated. Agent as a sidecar is the only supported configuration for these use cases. If you are installing distributed tracing as part of Red Hat OpenShift Service Mesh, the distributed tracing resources must be installed in the same namespace as the ServiceMeshControlPlane resource. For information about configuring persistent storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 3.2.5.2. Distributed tracing default configuration options The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform resources. You can modify these parameters to customize your distributed tracing platform implementation to your business needs. Jaeger generic YAML example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {} Table 3.1. Jaeger parameters Parameter Description Values Default value apiVersion: API version to use when creating the object. jaegertracing.io/v1 jaegertracing.io/v1 kind: Defines the kind of Kubernetes object to create. 
jaeger metadata: Data that helps uniquely identify the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. name: Name for the object. The name of your distributed tracing platform instance. jaeger-all-in-one-inmemory spec: Specification for the object to be created. Contains all of the configuration parameters for your distributed tracing platform instance. When a common definition for all Jaeger components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node. N/A strategy: Jaeger deployment strategy allInOne , production , or streaming allInOne allInOne: Because the allInOne image deploys the Agent, Collector, Query, Ingester, and Jaeger UI in a single pod, configuration for this deployment must nest component configuration under the allInOne parameter. agent: Configuration options that define the Agent. collector: Configuration options that define the Jaeger Collector. sampling: Configuration options that define the sampling strategies for tracing. storage: Configuration options that define the storage. All storage-related options must be placed under storage , rather than under the allInOne or other component options. query: Configuration options that define the Query service. ingester: Configuration options that define the Ingester service. The following example YAML is the minimum required to create a Red Hat OpenShift distributed tracing platform deployment using the default settings. Example minimum required dist-tracing-all-in-one.yaml apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory 3.2.5.3. Jaeger Collector configuration options The Jaeger Collector is the component responsible for receiving the spans that were captured by the tracer and writing them to persistent Elasticsearch storage when using the production strategy, or to AMQ Streams when using the streaming strategy. The Collectors are stateless and thus many instances of Jaeger Collector can be run in parallel. Collectors require almost no configuration, except for the location of the Elasticsearch cluster. Table 3.2. Parameters used by the Operator to define the Jaeger Collector Parameter Description Values Specifies the number of Collector replicas to create. Integer, for example, 5 Table 3.3. Configuration parameters passed to the Collector Parameter Description Values Configuration options that define the Jaeger Collector. The number of workers pulling from the queue. Integer, for example, 50 The size of the Collector queue. Integer, for example, 2000 The topic parameter identifies the Kafka configuration used by the Collector to produce the messages, and the Ingester to consume the messages. Label for the producer. Identifies the Kafka configuration used by the Collector to produce the messages. If brokers are not specified, and you have AMQ Streams 1.4.0+ installed, the Red Hat OpenShift distributed tracing platform Operator will self-provision Kafka. Logging level for the Collector. Possible values: debug , info , warn , error , fatal , panic . 3.2.5.4. Distributed tracing sampling configuration options The Red Hat OpenShift distributed tracing platform Operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler. 
While all traces are generated, only a few are sampled. Sampling a trace marks the trace for further processing and storage. Note This is not relevant if a trace was started by the Envoy proxy, as the sampling decision is made there. The Jaeger sampling decision is only relevant when the trace is started by an application using the client. When a service receives a request that contains no trace context, the client starts a new trace, assigns it a random trace ID, and makes a sampling decision based on the currently installed sampling strategy. The sampling decision propagates to all subsequent requests in the trace so that other services are not making the sampling decision again. distributed tracing platform libraries support the following samplers: Probabilistic - The sampler makes a random sampling decision with the probability of sampling equal to the value of the sampling.param property. For example, using sampling.param=0.1 samples approximately 1 in 10 traces. Rate Limiting - The sampler uses a leaky bucket rate limiter to ensure that traces are sampled with a certain constant rate. For example, using sampling.param=2.0 samples requests with the rate of 2 traces per second. Table 3.4. Jaeger sampling options Parameter Description Values Default value Configuration options that define the sampling strategies for tracing. If you do not provide configuration, the Collectors will return the default probabilistic sampling policy with 0.001 (0.1%) probability for all services. Sampling strategy to use. See descriptions above. Valid values are probabilistic , and ratelimiting . probabilistic Parameters for the selected sampling strategy. Decimal and integer values (0, .1, 1, 10) 1 This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled. Probabilistic sampling example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5 If there are no user-supplied configurations, the distributed tracing platform uses the following settings: Default sampling spec: sampling: options: default_strategy: type: probabilistic param: 1 3.2.5.5. Distributed tracing storage configuration options You configure storage for the Collector, Ingester, and Query services under spec.storage . Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. Table 3.5. General storage parameters used by the Red Hat OpenShift distributed tracing platform Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory or elasticsearch . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments as the data does not persist if the pod is shut down. For production environments distributed tracing platform supports Elasticsearch for persistent storage. memory Name of the secret, for example tracing-secret . N/A Configuration options that define the storage. Table 3.6. Elasticsearch index cleaner parameters Parameter Description Values Default value When using Elasticsearch storage, by default a job is created to clean old traces from the index. 
This parameter enables or disables the index cleaner job. true / false true Number of days to wait before deleting an index. Integer value 7 Defines the schedule for how often to clean the Elasticsearch index. Cron expression "55 23 * * *" 3.2.5.5.1. Auto-provisioning an Elasticsearch instance When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the storage section of the custom resource file. The Red Hat OpenShift distributed tracing platform Operator will provision Elasticsearch if the following configurations are set: spec.storage:type is set to elasticsearch spec.storage.elasticsearch.doNotProvision set to false spec.storage.options.es.server-urls is not defined, that is, there is no connection to an Elasticsearch instance that was not provisioned by the Red Hat Elasticsearch Operator. When provisioning Elasticsearch, the Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource. If you do not specify a value for spec.storage.elasticsearch.name , the Operator uses elasticsearch . Restrictions You can have only one distributed tracing platform with self-provisioned Elasticsearch instance per namespace. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. There can be only one Elasticsearch per namespace. Note If you already have installed Elasticsearch as part of OpenShift Logging, the Red Hat OpenShift distributed tracing platform Operator can use the installed OpenShift Elasticsearch Operator to provision storage. The following configuration parameters are for a self-provisioned Elasticsearch instance, that is an instance created by the Red Hat OpenShift distributed tracing platform Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under spec:storage:elasticsearch in your configuration file. Table 3.7. Elasticsearch resource configuration parameters Parameter Description Values Default value Use to specify whether or not an Elasticsearch instance should be provisioned by the Red Hat OpenShift distributed tracing platform Operator. true / false true Name of the Elasticsearch instance. The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in this parameter to connect to Elasticsearch. string elasticsearch Number of Elasticsearch nodes. For high availability use at least 3 nodes. Do not use 2 nodes as "split brain" problem can happen. Integer value. For example, Proof of concept = 1, Minimum deployment =3 3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 1 Available memory for requests, based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* 16Gi Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores, for example, 200m, 0.5, 1. For example, Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes, for example, 200Ki, 50Mi, 5Gi. 
For example, Proof of concept = 1Gi, Minimum deployment = 16Gi* Data replication policy defines how Elasticsearch shards are replicated across data nodes in the cluster. If not specified, the Red Hat OpenShift distributed tracing platform Operator automatically determines the most appropriate replication based on number of nodes. ZeroRedundancy (no replica shards), SingleRedundancy (one replica shard), MultipleRedundancy (each index is spread over half of the Data nodes), FullRedundancy (each index is fully replicated on every Data node in the cluster). Use to specify whether or not distributed tracing platform should use the certificate management feature of the Red Hat Elasticsearch Operator. This feature was added to the logging subsystem for Red Hat OpenShift 5.2 in OpenShift Container Platform 4.7 and is the preferred setting for new Jaeger deployments. true / false true *Each Elasticsearch node can operate with a lower memory setting, though this is NOT recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Production storage example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi Storage example with persistent storage: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy 1 Persistent storage configuration. In this case AWS gp2 with 5Gi size. When no value is specified, distributed tracing platform uses emptyDir . The OpenShift Elasticsearch Operator provisions PersistentVolumeClaim and PersistentVolume which are not removed with the distributed tracing platform instance. You can mount the same volumes if you create a distributed tracing platform instance with the same name and namespace. 3.2.5.5.2. Connecting to an existing Elasticsearch instance You can use an existing Elasticsearch cluster for storage with distributed tracing. An existing Elasticsearch cluster, also known as an external Elasticsearch instance, is an instance that was not installed by the Red Hat OpenShift distributed tracing platform Operator or by the Red Hat Elasticsearch Operator. When you deploy a Jaeger custom resource, the Red Hat OpenShift distributed tracing platform Operator will not provision Elasticsearch if the following configurations are set: spec.storage.elasticsearch.doNotProvision set to true spec.storage.options.es.server-urls has a value spec.storage.elasticsearch.name has a value, or if the Elasticsearch instance name is elasticsearch . The Red Hat OpenShift distributed tracing platform Operator uses the Elasticsearch instance specified in spec.storage.elasticsearch.name to connect to Elasticsearch. A minimal example that sets these values is shown after the restrictions below. Restrictions You cannot share or reuse an OpenShift Container Platform logging Elasticsearch instance with distributed tracing platform. The Elasticsearch cluster is meant to be dedicated for a single distributed tracing platform instance. Note Red Hat does not provide support for your external Elasticsearch instance. You can review the tested integrations matrix on the Customer Portal .
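The following minimal sketch shows a Jaeger custom resource that sets these values to connect to an existing Elasticsearch instance. The instance name external-es and the server URL are placeholder values for illustration only; substitute the values for your environment. A complete example that also configures TLS and credentials appears later in this section.
Minimal external Elasticsearch sketch
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: external-es
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      doNotProvision: true
    options:
      es:
        server-urls: http://my-external-elasticsearch:9200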
The following configuration parameters are for an already existing Elasticsearch instance, also known as an external Elasticsearch instance. In this case, you specify configuration options for Elasticsearch under spec:storage:options:es in your custom resource file. Table 3.8. General ES configuration parameters Parameter Description Values Default value URL of the Elasticsearch instance. The fully-qualified domain name of the Elasticsearch server. http://elasticsearch.<namespace>.svc:9200 The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. If you set both es.max-doc-count and es.max-num-spans , Elasticsearch will use the smaller value of the two. 10000 [ Deprecated - Will be removed in a future release, use es.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. If you set both es.max-num-spans and es.max-doc-count , Elasticsearch will use the smaller value of the two. 10000 The maximum lookback for spans in Elasticsearch. 72h0m0s The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default true / false false Timeout used for queries. When set to zero there is no timeout. 0s The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es.password . The password required by Elasticsearch. See also, es.username . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Table 3.9. ES data replication parameters Parameter Description Values Default value The number of replicas per index in Elasticsearch. 1 The number of shards per index in Elasticsearch. 5 Table 3.10. ES index configuration parameters Parameter Description Values Default value Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false true Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". Table 3.11. ES bulk processor configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 1000 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 200ms The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 5000000 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 1 Table 3.12. ES TLS configuration parameters Parameter Description Values Default value Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. 
This flag also loads the Certification Authority (CA) file if it is specified. Table 3.13. ES archive configuration parameters Parameter Description Values Default value The number of requests that can be added to the queue before the bulk processor decides to commit updates to disk. 0 A time.Duration after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. 0s The number of bytes that the bulk requests can take up before the bulk processor decides to commit updates to disk. 0 The number of workers that are able to receive and commit bulk requests to Elasticsearch. 0 Automatically create index templates at application startup when set to true . When templates are installed manually, set to false . true / false false Enable extra storage. true / false false Optional prefix for distributed tracing platform indices. For example, setting this to "production" creates indices named "production-tracing-*". The maximum document count to return from an Elasticsearch query. This will also apply to aggregations. 0 [ Deprecated - Will be removed in a future release, use es-archive.max-doc-count instead.] The maximum number of spans to fetch at a time, per query, in Elasticsearch. 0 The maximum lookback for spans in Elasticsearch. 0s The number of replicas per index in Elasticsearch. 0 The number of shards per index in Elasticsearch. 0 The password required by Elasticsearch. See also, es.username . The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, http://localhost:9200 . The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. true / false false Timeout used for queries. When set to zero there is no timeout. 0s Path to a TLS Certification Authority (CA) file used to verify the remote servers. Will use the system truststore by default. Path to a TLS Certificate file, used to identify this process to the remote servers. Enable transport layer security (TLS) when talking to the remote servers. Disabled by default. true / false false Path to a TLS Private Key file, used to identify this process to the remote servers. Override the expected TLS server name in the certificate of the remote servers. Path to a file containing the bearer token. This flag also loads the Certification Authority (CA) file if it is specified. The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also es-archive.password . The major Elasticsearch version. If not specified, the value will be auto-detected from Elasticsearch. 0 Storage example with volume mounts apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret. 
External Elasticsearch example: apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public 1 URL to Elasticsearch service running in default namespace. 2 TLS configuration. In this case only CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS. 3 Secret which defines environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic tracing-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic 4 Volume mounts and volumes which are mounted into all storage components. 3.2.5.6. Managing certificates with Elasticsearch You can create and manage certificates using the Red Hat Elasticsearch Operator. Managing certificates using the Red Hat Elasticsearch Operator also lets you use a single Elasticsearch cluster with multiple Jaeger Collectors. Important Managing certificates with Elasticsearch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Starting with version 2.4, the Red Hat OpenShift distributed tracing platform Operator delegates certificate creation to the Red Hat Elasticsearch Operator by using the following annotations in the Elasticsearch custom resource: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-<shared-es-node-name>: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-<shared-es-node-name>: "system.logging.curator" Where the <shared-es-node-name> is the name of the Elasticsearch node. For example, if you create an Elasticsearch node named custom-es , your custom resource might look like the following example. Example Elasticsearch CR showing annotations apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: "true" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: "user.jaeger" logging.openshift.io/elasticsearch-cert.curator-custom-es: "system.logging.curator" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy Prerequisites OpenShift Container Platform 4.7 logging subsystem for Red Hat OpenShift 5.2 The Elasticsearch node and the Jaeger instances must be deployed in the same namespace. For example, tracing-system . You enable certificate management by setting spec.storage.elasticsearch.useCertManagement to true in the Jaeger custom resource. 
Example showing useCertManagement apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true The Red Hat OpenShift distributed tracing platform Operator sets the Elasticsearch custom resource name to the value of spec.storage.elasticsearch.name from the Jaeger custom resource when provisioning Elasticsearch. The certificates are provisioned by the Red Hat Elasticsearch Operator and the Red Hat OpenShift distributed tracing platform Operator injects the certificates. 3.2.5.7. Query configuration options Query is a service that retrieves traces from storage and hosts the user interface to display them. Table 3.14. Parameters used by the Red Hat OpenShift distributed tracing platform Operator to define Query Parameter Description Values Default value Specifies the number of Query replicas to create. Integer, for example, 2 Table 3.15. Configuration parameters passed to Query Parameter Description Values Default value Configuration options that define the Query service. Logging level for Query. Possible values: debug , info , warn , error , fatal , panic . The base path for all jaeger-query HTTP routes can be set to a non-root value, for example, /jaeger would cause all UI URLs to start with /jaeger . This can be useful when running jaeger-query behind a reverse proxy. /<path> Sample Query configuration apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "my-jaeger" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger 3.2.5.8. Ingester configuration options Ingester is a service that reads from a Kafka topic and writes to the Elasticsearch storage backend. If you are using the allInOne or production deployment strategies, you do not need to configure the Ingester service. Table 3.16. Jaeger parameters passed to the Ingester Parameter Description Values Configuration options that define the Ingester service. Specifies the interval, in seconds or minutes, that the Ingester must wait for a message before terminating. The deadlock interval is disabled by default (set to 0 ), to avoid terminating the Ingester when no messages arrive during system initialization. Minutes and seconds, for example, 1m0s . Default value is 0 . The topic parameter identifies the Kafka configuration used by the collector to produce the messages, and the Ingester to consume the messages. Label for the consumer. For example, jaeger-spans . Identifies the Kafka configuration used by the Ingester to consume the messages. Label for the broker, for example, my-cluster-kafka-brokers.kafka:9092 . Logging level for the Ingester. Possible values: debug , info , warn , error , fatal , dpanic , panic . Streaming Collector and Ingester example apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200 3.2.6. Injecting sidecars Red Hat OpenShift distributed tracing platform relies on a proxy sidecar within the application's pod to provide the agent. The Red Hat OpenShift distributed tracing platform Operator can inject Agent sidecars into Deployment workloads. 
You can enable automatic sidecar injection or manage it manually. 3.2.6.1. Automatically injecting sidecars The Red Hat OpenShift distributed tracing platform Operator can inject Jaeger Agent sidecars into Deployment workloads. To enable automatic injection of sidecars, add the sidecar.jaegertracing.io/inject annotation set to either the string true or to the distributed tracing platform instance name that is returned by running USD oc get jaegers . When you specify true , there should be only a single distributed tracing platform instance for the same namespace as the deployment; otherwise, the Operator cannot determine which distributed tracing platform instance to use. A specific distributed tracing platform instance name on a deployment has a higher precedence than true applied on its namespace. The following snippet shows a simple application that will inject a sidecar, with the agent pointing to the single distributed tracing platform instance available in the same namespace: Automatic sidecar injection example apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: "sidecar.jaegertracing.io/inject": "true" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion 1 Set to either the string true or to the Jaeger instance name. When the sidecar is injected, the agent can then be accessed at its default location on localhost . 3.2.6.2. Manually injecting sidecars The Red Hat OpenShift distributed tracing platform Operator can only automatically inject Jaeger Agent sidecars into Deployment workloads. For controller types other than Deployments , such as StatefulSets and DaemonSets , you can manually define the Jaeger agent sidecar in your specification. The following snippet shows the manual definition you can include in your containers section for a Jaeger agent sidecar: Sidecar definition example for a StatefulSet apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc The agent can then be accessed at its default location on localhost. 3.3. Configuring and deploying distributed tracing data collection The Red Hat OpenShift distributed tracing data collection Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat OpenShift distributed tracing data collection resources. You can either install the default configuration or modify the file to better suit your business requirements. 3.3.1. OpenTelemetry Collector configuration options Important The Red Hat OpenShift distributed tracing data collection Operator is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . The OpenTelemetry Collector consists of three components that access telemetry data: Receivers - A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Processors - (Optional) Processors are run on data between being received and being exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, it may be recommended that multiple processors be enabled. In addition, it is important to note that the order of processors matters. Exporters - An exporter, which can be push or pull based, is how you send data to one or more backends/destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters may support one or more data sources. Exporters may come with default settings, but many require configuration to specify at least the destination and security settings. You can define multiple instances of components in a custom resource YAML file. Once configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, you should enable only the components that you need. Sample OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: processors: exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" service: pipelines: traces: receivers: [otlp] processors: [] exporters: [jaeger] Note If a component is configured but not defined within the service section, it is not enabled. Table 3.17. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger None The otlp and jaeger receivers come with default settings; specifying the name of the receiver is enough to configure it. Processors run on data between being received and being exported. By default, no processors are enabled. None An exporter sends data to one or more backends/destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline.
Exporters may come with default settings, but many require configuration to specify at least the destination and security settings. logging , jaeger None The jaeger exporter's endpoint must be of the form <name>-collector-headless.<namespace>.svc , with the name and namespace of the Jaeger deployment, for a secure connection to be established. Path to the CA certificate. For a client this verifies the server certificate. For a server this verifies client certificates. If empty uses system root CA. Components are enabled by adding them to a pipeline under services.pipeline . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . None 3.3.2. Validating your deployment 3.3.3. Accessing the Jaeger console To access the Jaeger console you must have either Red Hat OpenShift Service Mesh or Red Hat OpenShift distributed tracing installed, and Red Hat OpenShift distributed tracing platform installed, configured, and deployed. The installation process creates a route to access the Jaeger console. If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions. Procedure from OpenShift console Log in to the OpenShift Container Platform web console as a user with cluster-admin rights. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Networking Routes . On the Routes page, select the control plane project, for example tracing-system , from the Namespace menu. The Location column displays the linked address for each route. If necessary, use the filter to find the jaeger route. Click the route Location to launch the console. Click Log In With OpenShift . Procedure from the CLI Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 To query for details of the route using the command line, enter the following command. In this example, tracing-system is the control plane namespace. USD export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}') Launch a browser and navigate to https://<JAEGER_URL> , where <JAEGER_URL> is the route that you discovered in the step. Log in using the same user name and password that you use to access the OpenShift Container Platform console. If you have added services to the service mesh and have generated traces, you can use the filters and Find Traces button to search your trace data. If you are validating the console installation, there is no trace data to display. 3.4. Upgrading distributed tracing Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in OpenShift Container Platform. OLM queries for available Operators as well as upgrades for installed Operators. For more information about how OpenShift Container Platform handles upgrades, see the Operator Lifecycle Manager documentation. During an update, the Red Hat OpenShift distributed tracing Operators upgrade the managed distributed tracing instances to the version associated with the Operator. 
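To check which Operator versions OLM currently has installed, you can list the cluster service versions in the project where the Operators are installed. The openshift-operators namespace shown here is the default installation project and may differ in your cluster. For example: USD oc get csv -n openshift-operators The output lists each installed Operator along with its version and status.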
Whenever a new version of the Red Hat OpenShift distributed tracing platform Operator is installed, all the distributed tracing platform application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running distributed tracing platform instances and upgrades them to 1.11 as well. For specific instructions on how to update the OpenShift Elasticsearch Operator, see Updating OpenShift Logging . 3.4.1. Changing the Operator channel for 2.0 Red Hat OpenShift distributed tracing 2.0.0 made the following changes: Renamed the Red Hat OpenShift Jaeger Operator to the Red Hat OpenShift distributed tracing platform Operator. Stopped support for individual release channels. Going forward, the Red Hat OpenShift distributed tracing platform Operator will only support the stable Operator channel. Maintenance channels, for example 1.24-stable , will no longer be supported by future Operators. As part of the update to version 2.0, you must update your OpenShift Elasticsearch and Red Hat OpenShift distributed tracing platform Operator subscriptions. Prerequisites The OpenShift Container Platform version is 4.6 or later. You have updated the OpenShift Elasticsearch Operator. You have backed up the Jaeger custom resource file. An account with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Important If you have not already updated your OpenShift Elasticsearch Operator as described in Updating OpenShift Logging, complete that update before updating your Red Hat OpenShift distributed tracing platform Operator. For instructions on how to update the Operator channel, see Upgrading installed Operators . 3.5. Removing distributed tracing The steps for removing Red Hat OpenShift distributed tracing from an OpenShift Container Platform cluster are as follows: Shut down any Red Hat OpenShift distributed tracing pods. Remove any Red Hat OpenShift distributed tracing instances. Remove the Red Hat OpenShift distributed tracing platform Operator. Remove the Red Hat OpenShift distributed tracing data collection Operator. 3.5.1. Removing a Red Hat OpenShift distributed tracing platform instance using the web console Note When deleting an instance that uses the in-memory storage, all data is permanently lost. Data stored in persistent storage such as Elasticsearch is not deleted when a Red Hat OpenShift distributed tracing platform instance is removed. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Select the name of the project where the Operators are installed from the Project menu, for example, openshift-operators . Click the Red Hat OpenShift distributed tracing platform Operator. Click the Jaeger tab. Click the Options menu next to the instance that you want to delete and select Delete Jaeger . In the confirmation message, click Delete . 3.5.2. Removing a Red Hat OpenShift distributed tracing platform instance from the CLI Log in to the OpenShift Container Platform CLI. $ oc login --username=<NAMEOFUSER> To display the distributed tracing platform instances, run the following command: $ oc get deployments -n <jaeger-project> For example, $ oc get deployments -n openshift-operators The names of Operators have the suffix -operator . 
The following example shows two Red Hat OpenShift distributed tracing platform Operators and four distributed tracing platform instances: $ oc get deployments -n openshift-operators You should see output similar to the following: NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m To remove an instance of distributed tracing platform, run the following command: $ oc delete jaeger <deployment-name> -n <jaeger-project> For example: $ oc delete jaeger tracing2 -n openshift-operators To verify the deletion, run the oc get deployments command again: $ oc get deployments -n <jaeger-project> For example: $ oc get deployments -n openshift-operators You should see generated output that is similar to the following example: NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s 3.5.3. Removing the Red Hat OpenShift distributed tracing Operators Procedure Follow the instructions for Deleting Operators from a cluster . Remove the Red Hat OpenShift distributed tracing platform Operator. After the Red Hat OpenShift distributed tracing platform Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator.
[ "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443", "oc new-project tracing-system", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "oc create -n tracing-system -f jaeger.yaml", "oc get pods -n tracing-system -w", "NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443", "oc new-project tracing-system", "oc create -n tracing-system -f jaeger-production.yaml", "oc get pods -n tracing-system -w", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans #Note: If brokers are not defined,AMQStreams 1.4.0+ will self-provision Kafka. brokers: my-cluster-kafka-brokers.kafka:9092 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443", "oc new-project tracing-system", "oc create -n tracing-system -f jaeger-streaming.yaml", "oc get pods -n tracing-system -w", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory", "collector: replicas:", "spec: collector: options: {}", "options: collector: num-workers:", "options: collector: queue-size:", "options: kafka: producer: topic: jaeger-spans", "options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092", "options: log-level:", "spec: sampling: options: {} default_strategy: 
service_strategy:", "default_strategy: type: service_strategy: type:", "default_strategy: param: service_strategy: param:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5", "spec: sampling: options: default_strategy: type: probabilistic param: 1", "spec: storage: type:", "storage: secretname:", "storage: options: {}", "storage: esIndexCleaner: enabled:", "storage: esIndexCleaner: numberOfDays:", "storage: esIndexCleaner: schedule:", "elasticsearch: properties: doNotProvision:", "elasticsearch: properties: name:", "elasticsearch: nodeCount:", "elasticsearch: resources: requests: cpu:", "elasticsearch: resources: requests: memory:", "elasticsearch: resources: limits: cpu:", "elasticsearch: resources: limits: memory:", "elasticsearch: redundancyPolicy:", "elasticsearch: useCertManagement:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy", "es: server-urls:", "es: max-doc-count:", "es: max-num-spans:", "es: max-span-age:", "es: sniffer:", "es: sniffer-tls-enabled:", "es: timeout:", "es: username:", "es: password:", "es: version:", "es: num-replicas:", "es: num-shards:", "es: create-index-templates:", "es: index-prefix:", "es: bulk: actions:", "es: bulk: flush-interval:", "es: bulk: size:", "es: bulk: workers:", "es: tls: ca:", "es: tls: cert:", "es: tls: enabled:", "es: tls: key:", "es: tls: server-name:", "es: token-file:", "es-archive: bulk: actions:", "es-archive: bulk: flush-interval:", "es-archive: bulk: size:", "es-archive: bulk: workers:", "es-archive: create-index-templates:", "es-archive: enabled:", "es-archive: index-prefix:", "es-archive: max-doc-count:", "es-archive: max-num-spans:", "es-archive: max-span-age:", "es-archive: num-replicas:", "es-archive: num-shards:", "es-archive: password:", "es-archive: server-urls:", "es-archive: sniffer:", "es-archive: sniffer-tls-enabled:", "es-archive: timeout:", "es-archive: tls: ca:", "es-archive: tls: cert:", "es-archive: tls: enabled:", "es-archive: tls: key:", "es-archive: tls: server-name:", "es-archive: token-file:", "es-archive: username:", "es-archive: version:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: 
/es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public", "apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true", "spec: query: replicas:", "spec: query: options: {}", "options: log-level:", "options: query: base-path:", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger", "spec: ingester: options: {}", "options: deadlockInterval:", "options: kafka: consumer: topic:", "options: kafka: consumer: brokers:", "options: log-level:", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200", "apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion", "apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc", "apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment config: | receivers: otlp: protocols: grpc: http: processors: exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" service: pipelines: traces: receivers: [otlp] processors: [] exporters: [jaeger]", "receivers:", "receivers: otlp:", "processors:", "exporters:", "exporters: jaeger: endpoint:", "exporters: jaeger: tls: ca_file:", "service: pipelines:", "service: pipelines: 
traces: receivers:", "service: pipelines: traces: processors:", "service: pipelines: traces: exporters:", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')", "oc login --username=<NAMEOFUSER>", "oc get deployments -n <jaeger-project>", "oc get deployments -n openshift-operators", "oc get deployments -n openshift-operators", "NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m", "oc delete jaeger <deployment-name> -n <jaeger-project>", "oc delete jaeger tracing2 -n openshift-operators", "oc get deployments -n <jaeger-project>", "oc get deployments -n openshift-operators", "NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/distributed_tracing/distributed-tracing-installation
2.3.2. Striped Logical Volumes
2.3.2. Striped Logical Volumes When you write data to an LVM logical volume, the file system lays the data out across the underlying physical volumes. You can control the way the data is written to the physical volumes by creating a striped logical volume. For large sequential reads and writes, this can improve the efficiency of the data I/O. Striping enhances performance by writing data to a predetermined number of physical volumes in round-robin fashion. With striping, I/O can be done in parallel. In some situations, this can result in near-linear performance gain for each additional physical volume in the stripe. The following illustration shows data being striped across three physical volumes. In this figure: the first stripe of data is written to PV1 the second stripe of data is written to PV2 the third stripe of data is written to PV3 the fourth stripe of data is written to PV1 In a striped logical volume, the size of the stripe cannot exceed the size of an extent. Figure 2.5. Striping Data Across Three PVs Striped logical volumes can be extended by concatenating another set of devices onto the end of the first set. In order to extend a striped logical volume, however, there must be enough free space on the underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe. Instead, you must add at least two physical volumes to the volume group. For more information on extending a striped volume, see Section 4.4.9, "Extending a Striped Volume" .
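As an illustration of the striping behavior described above, a striped logical volume is created with the -i (number of stripes) and -I (stripe size in kilobytes) options of the lvcreate command. The volume group and device names below are hypothetical and only sketch the idea:
# Create a 2 GB logical volume striped across three physical volumes
# with a 64 KB stripe size (vg0 and the device names are examples only).
lvcreate -L 2G -i 3 -I 64 -n striped_lv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1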
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/striped_volumes
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices
Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Although it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation, this deployment method is not supported on ROSA. Note Only internal OpenShift Data Foundation clusters are supported on ROSA. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in the Preparing to deploy OpenShift Data Foundation chapter before proceeding with the following steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install the Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub for ROSA with hosted control planes (HCP). Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the storage namespace: Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Fill in role ARN . For instructions on how to create an Amazon Resource Name (ARN), see Creating an AWS role using a script . Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Select a Namespace . Note The openshift-storage Namespace is not recommended for ROSA deployments. Use a user-defined namespace for this deployment. Avoid using "redhat" or "openshift" prefixes in namespaces. Important This guide uses <storage_namespace> as an example namespace. Replace <storage_namespace> with your defined namespace in later steps. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. The Manual updates strategy is recommended for ROSA with hosted control planes. Ensure that the Enable option is selected for the Console plugin . 
Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where <serviceaccount_name> is the service account created in the earlier step, and <storage_namespace> is the namespace where the ODF operator and the StorageSystem were created. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. 
Select the Use an existing StorageClass option. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for Multicloud Object Gateway, where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource-constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using an encryption-enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state: 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation:
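The storage classes can also be checked from the CLI. The names below are the ones typically created by an internal-mode deployment; treat this list as illustrative rather than authoritative, because the exact names can vary with the platform, version, and the namespace chosen for the deployment:
$ oc get storageclass
# Typically includes, in addition to the platform defaults:
#   ocs-storagecluster-ceph-rbd    (block)
#   ocs-storagecluster-cephfs      (file)
#   <storage_namespace>.noobaa.io  (object, Multicloud Object Gateway)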
[ "oc annotate namespace storage-namespace openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n <storage-namespace> create serviceaccount <serviceaccount_name>", "oc -n <storage-namespace> create serviceaccount odf-vault-auth", "oc -n <storage-namespace> create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n <storage-namespace> create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: <storage-namespace> annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n <storage_namespace> get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n <storage_namespace> get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=<storage_namespace> policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=<storage_namespace> policies=odf ttl=1440h", "patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/deploy-using-dynamic-storage-devices-rosa
Chapter 5. Installer and image creation
Chapter 5. Installer and image creation 5.1. Add-ons 5.1.1. OSCAP The Open Security Content Automation Protocol (OSCAP) add-on is enabled by default in RHEL 8. 5.1.2. Kdump The Kdump add-on adds support for configuring kernel crash dumping during installation. This add-on has full support in Kickstart (using the %addon com_redhat_kdump command and its options), and is fully integrated as an additional window in the graphical and text-based user interfaces. 5.2. Installer networking A new network device naming scheme that generates network interface names based on a user-defined prefix is available in Red Hat Enterprise Linux 8. The net.ifnames.prefix boot option allows the device naming scheme to be used by the installation program and the installed system. Additional resources For more information, see RHEL-8 new custom NIC names helper or Customizing the prefix for Ethernet interfaces during installation . 5.3. Installation images and packages 5.3.1. Ability to register your system, attach RHEL subscriptions, and install from the Red Hat CDN Since Red Hat Enterprise Linux 8.2, you can register your system, attach RHEL subscriptions, and install from the Red Hat Content Delivery Network (CDN) before package installation. Interactive GUI installations, as well as automated Kickstart installations, support this feature. For more information, see the RHEL 8.2 Release Notes document. 5.3.2. Ability to register your system to Red Hat Insights during installation Red Hat Insights is a managed service that gathers and analyzes platform and application data to predict risk, recommend actions, and track costs. Insights alerts you about warnings or optimizations that are relevant to several operational areas: system availability (including potential outages), security (for example, a new CVE is discovered for your systems), and business (such as overspending). Insights is included as part of your Red Hat subscription and is accessible through the Red Hat Hybrid Cloud Console. See also the Red Hat Insights documentation . Since Red Hat Enterprise Linux 8.2, you can register your system to Red Hat Insights during installation. Interactive GUI installations, as well as automated Kickstart installations, support this feature. For more information, see the RHEL 8.2 Release Notes document. 5.3.3. Unified ISO In Red Hat Enterprise Linux 8, a unified ISO automatically loads the BaseOS and AppStream installation source repositories. This feature works for the first base repository that is loaded during installation. For example, if you boot the installation with no repository configured and have the unified ISO as the base repository in the graphical user interface (GUI), or if you boot the installation using the inst.repo= option that points to the unified ISO. As a result, the AppStream repository is enabled under the Additional Repositories section of the Installation Source GUI window. You cannot remove the AppStream repository or change its settings but you can disable it in Installation Source . This feature does not work if you boot the installation using a different base repository and then change it to the unified ISO. If you do that, the base repository is replaced. However, the AppStream repository is not replaced and points to the original file. 5.3.4. Stage2 image In Red Hat Enterprise Linux 8, multiple network locations of stage2 or Kickstart files can be specified to prevent installation failure. 
This update enables the specification of multiple inst.stage2 and inst.ks boot options with network locations of stage2 and a Kickstart file. This avoids the situation in which the requested files cannot be reached and the installation fails because the contacted server with the stage2 or the Kickstart file is inaccessible. With this new update, the installation failure can be avoided if multiple locations are specified. If all the defined locations are URLs, namely HTTP , HTTPS , or FTP , they will be tried sequentially until the requested file is fetched successfully. If there is a location that is not a URL, only the last specified location is tried. The remaining locations are ignored. 5.3.5. inst.addrepo parameter Previously, you could only specify a base repository from the kernel boot parameters. In Red Hat Enterprise Linux 8, a new kernel parameter, inst.addrepo=<name>,<url> , allows you to specify an additional repository during installation. This parameter has two mandatory values: the name of the repository and the URL that points to the repository. For more information, see the inst-addrepo usage . 5.3.6. Installation from an expanded ISO Red Hat Enterprise Linux 8 supports installing from a repository on a local hard drive. Previously, the only installation method from a hard drive was using an ISO image as the installation source. However, the Red Hat Enterprise Linux 8 ISO image might be too big for some file systems; for example, the FAT32 file system cannot store files larger than 4 GiB. In Red Hat Enterprise Linux 8, you can enable installation from a repository on a local hard drive; you only need to specify the directory instead of the ISO image. For example: inst.repo=hd:<device>:<path to the repository> . For more information about the Red Hat Enterprise Linux 8 BaseOS and AppStream repositories, see the Repositories section of this document. 5.4. Installer Graphical User Interface 5.4.1. The Installation Summary window The Installation Summary window of the Red Hat Enterprise Linux 8 graphical installation has been updated to a new three-column layout that provides improved organization of graphical installation settings. 5.5. System Purpose new in RHEL 5.5.1. System Purpose support in the graphical installation Previously, the Red Hat Enterprise Linux installation program did not provide system purpose information to Subscription Manager. In Red Hat Enterprise Linux 8, you can set the intended purpose of the system during a graphical installation by using the System Purpose window, or in a Kickstart configuration file by using the syspurpose command. When you set a system's purpose, the entitlement server receives information that helps auto-attach a subscription that satisfies the intended use of the system. 5.5.2. System Purpose support in Pykickstart Previously, it was not possible for the pykickstart library to provide system purpose information to Subscription Manager. In Red Hat Enterprise Linux 8, pykickstart parses the new syspurpose command and records the intended purpose of the system during automated and partially-automated installation. The information is then passed to the installation program, saved on the newly-installed system, and available for Subscription Manager when subscribing the system. 5.6. Installer module support 5.6.1. Installing modules using Kickstart In Red Hat Enterprise Linux 8, the installation program has been extended to handle all modular features. 
Kickstart scripts can now enable module and stream combinations, install module profiles, and install modular packages. 5.7. Kickstart changes The following sections describe the changes in Kickstart commands and options in Red Hat Enterprise Linux 8. auth or authconfig is deprecated in RHEL 8 The auth or authconfig Kickstart command is deprecated in Red Hat Enterprise Linux 8 because the authconfig tool and package have been removed. Similarly to authconfig commands issued on the command line, authconfig commands in Kickstart scripts now use the authselect-compat tool to run the new authselect tool. For a description of this compatibility layer and its known issues, see the manual page authselect-migration(7) . The installation program will automatically detect use of the deprecated commands and install the authselect-compat package on the system to provide the compatibility layer. Kickstart no longer supports Btrfs The Btrfs file system is not supported in Red Hat Enterprise Linux 8. As a result, the Graphical User Interface (GUI) and the Kickstart commands no longer support Btrfs. Using Kickstart files from previous RHEL releases If you are using Kickstart files from previous RHEL releases, see the Repositories section of the Considerations in adopting RHEL 8 document for more information about the Red Hat Enterprise Linux 8 BaseOS and AppStream repositories. 5.7.1. Deprecated Kickstart commands and options The following Kickstart commands and options have been deprecated in Red Hat Enterprise Linux 8. Where only specific options are listed, the base command and its other options are still available and not deprecated. auth or authconfig - use authselect instead device deviceprobe dmraid install - use the subcommands or methods directly as commands multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec syspurpose - use subscription-manager syspurpose instead Except for the auth or authconfig command, using the commands in Kickstart files prints a warning in the logs. You can turn the deprecated command warnings into errors with the inst.ksstrict boot option, except for the auth or authconfig command. 5.7.2. Removed Kickstart commands and options The following Kickstart commands and options have been completely removed in Red Hat Enterprise Linux 8. Using them in Kickstart files will cause an error. device deviceprobe dmraid install - use the subcommands or methods directly as commands multipath bootloader --upgrade ignoredisk --interactive partition --active harddrive --biospart upgrade (This command had already been deprecated.) btrfs part/partition btrfs part --fstype btrfs or partition --fstype btrfs logvol --fstype btrfs raid --fstype btrfs unsupported_hardware Where only specific options and values are listed, the base command and its other options are still available and not removed. 5.8. Image creation 5.8.1. Custom system image creation with Image Builder The Image Builder tool enables users to create customized RHEL images. As of Red Hat Enterprise Linux 8.3, Image Builder runs as a system service from the osbuild-composer package. With Image Builder, users can create custom system images that include additional packages. Image Builder functionality can be accessed through: a graphical user interface in the web console a command-line interface in the composer-cli tool. 
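For reference, a minimal composer-cli workflow from the command line looks like the following sketch; the blueprint file name, blueprint name, and output type are placeholders, not values taken from this document:
# Upload a blueprint, start a qcow2 build, and watch its status
# (my-blueprint.toml and my-blueprint are example names).
composer-cli blueprints push my-blueprint.toml
composer-cli compose start my-blueprint qcow2
composer-cli compose status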
Image Builder output formats include, among others: TAR archive qcow2 file for direct use with a virtual machine or OpenStack QEMU QCOW2 Image cloud images for Azure, VMware, and AWS To learn more about Image Builder, see the documentation title Composing a customized RHEL system image .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/installer-and-image-creation_considerations-in-adopting-rhel-8
Chapter 12. Upgrading
Chapter 12. Upgrading For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator's new version. 12.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators
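To see what was rolled out after such an upgrade, you can list the installed Operator version and the Collector instances it manages. These are generic checks rather than steps from this procedure, and the namespace is an assumption; adjust it to wherever the Operator is installed:
$ oc get csv -n openshift-opentelemetry-operator
$ oc get opentelemetrycollectors --all-namespaces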
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/red_hat_build_of_opentelemetry/dist-tracing-otel-updating
Chapter 5. Running .NET 9.0 applications in containers
Chapter 5. Running .NET 9.0 applications in containers Use the ubi8/dotnet-90-runtime image to run a .NET application inside a Linux container. The following example uses Podman. Procedure Create a new MVC project in a directory called mvc_runtime_example : Publish the project: Run your image: View the application running in the container:
[ "dotnet new mvc --output mvc_runtime_example", "dotnet publish mvc_runtime_example -f net9.0 /p:PublishProfile=DefaultContainer /p:ContainerBaseImage=registry.access.redhat.com/ubi8/dotnet-90-runtime:latest", "podman run -rm -p8080:8080 mvc_runtime_example", "xdg-open http://127.0.0.1:8080" ]
https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_9/running-apps-in-containers-using-dotnet_assembly_publishing-apps-using-dotnet
Chapter 3. Installing and preparing the Operators
Chapter 3. Installing and preparing the Operators You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator ( openstack-operator ) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster. 3.1. Prerequisites An operational RHOCP cluster, version 4.16. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . The oc command line tool is installed on your workstation. You are logged in to the RHOCP cluster as a user with cluster-admin privileges. 3.2. Installing the OpenStack Operator You use OperatorHub on the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator ( openstack-operator ) on your RHOCP cluster. Procedure Log in to the RHOCP web console as a user with cluster-admin permissions. Select Operators OperatorHub . In the Filter by keyword field, type OpenStack . Click the OpenStack Operator tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list. Click Install to make the Operator available to the openstack-operators namespace. The Operators are deployed and ready when the Status of the OpenStack Operator is Succeeded .
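You can also confirm the installation from your workstation with the oc client; these are generic verification commands rather than steps from the procedure above:
$ oc get csv -n openstack-operators
$ oc get pods -n openstack-operators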
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_dynamic_routing_environment/assembly_installing-and-preparing-the-operators
Chapter 7. Logging in to the Identity Management Web UI using one time passwords
Chapter 7. Logging in to the Identity Management Web UI using one time passwords Access to the IdM Web UI can be secured using several methods. The basic one is password authentication. To increase the security of password authentication, you can add a second step and require automatically generated one-time passwords (OTPs). The most common usage is to combine the password connected with the user account and a time-limited one-time password generated by a hardware or software token. The following sections help you to: Understand how the OTP authentication works in IdM. Configure OTP authentication on the IdM server. Configure a RADIUS server for OTP validation in IdM. Create OTP tokens and synchronize them with the FreeOTP app in your phone. Authenticate to the IdM Web UI with the combination of user password and one time password. Re-synchronize tokens in the Web UI. Retrieve an IdM ticket-granting ticket as an OTP or RADIUS user Enforce OTP usage for all LDAP clients 7.1. Prerequisites Accessing the IdM Web UI in a web browser 7.2. One time password (OTP) authentication in Identity Management One-time passwords bring an additional step to your authentication security. The authentication uses your password + an automatically generated one time password. To generate one time passwords, you can use a hardware or software token. IdM supports both software and hardware tokens. Identity Management supports the following two standard OTP mechanisms: The HMAC-Based One-Time Password (HOTP) algorithm is based on a counter. HMAC stands for Hashed Message Authentication Code. The Time-Based One-Time Password (TOTP) algorithm is an extension of HOTP to support a time-based moving factor. Important IdM does not support OTP logins for Active Directory trust users. 7.3. Enabling the one-time password in the Web UI Identity Management (IdM) administrators can enable two-factor authentication (2FA) for IdM users either globally or individually. The user enters the one-time password (OTP) after their regular password on the command line or in the dedicated field in the Web UI login dialog, with no space between these passwords. Enabling 2FA is not the same as enforcing it. If you use logins based on LDAP-binds, IdM users can still authenticate by entering a password only. However, if you use krb5 -based logins, the 2FA is enforced. Note that there is an option to enforce 2FA for LDAP-binds by enforcing OTP usage for all LDAP clients. For more information, see Enforcing OTP usage for all LDAP clients . In a future release, Red Hat plans to provide a configuration option for administrators to select one of the following: Allow users to set their own tokens. In this case, LDAP-binds are still not going to enforce 2FA though krb5 -based logins are. Not allow users to set their own tokens. In this case, 2FA is going to be enforced in both LDAP-binds and krb5 -based logins. Complete this procedure to use the IdM Web UI to enable 2FA for the individual example.user IdM user. Prerequisites Administration privileges Procedure Log in to the IdM Web UI with IdM admin privileges. Open the Identity Users Active users tab. Select example.user to open the user settings. In the User authentication types , select Two factor authentication (password + OTP) . Click Save . At this point, the OTP authentication is enabled for the IdM user. Now you or example.user must assign a new token ID to the example.user account. 7.4. 
Configuring a RADIUS server for OTP validation in IdM To enable the migration of a large deployment from a proprietary one-time password (OTP) solution to the Identity Management (IdM)-native OTP solution, IdM offers a way to offload OTP validation to a third-party RADIUS server for a subset of users. The administrator creates a set of RADIUS proxies where each proxy can only reference a single RADIUS server. If more than one server needs to be addressed, it is recommended to create a virtual IP solution that points to multiple RADIUS servers. Such a solution must be built outside of RHEL IdM with the help of the keepalived daemon, for example. The administrator then assigns one of these proxy sets to a user. As long as the user has a RADIUS proxy set assigned, IdM bypasses all other authentication mechanisms. Note IdM does not provide any token management or synchronization support for tokens in the third-party system. Complete the procedure to configure a RADIUS server for OTP validation and to add a user to the proxy server: Prerequisites The radius user authentication method is enabled. See Enabling the one-time password in the Web UI for details. Procedure Add a RADIUS proxy: The command prompts you to insert the required information. The configuration of the RADIUS proxy requires the use of a common secret between the client and the server to wrap credentials. Specify this secret in the --secret parameter. Assign a user to the added proxy: If required, configure the user name to be sent to RADIUS: As a result, the RADIUS proxy server starts to process the user OTP authentication. When the user is ready to be migrated to the IdM native OTP system, you can simply remove the RADIUS proxy assignment for the user. 7.4.1. Changing the timeout value of a KDC when running a RADIUS server in a slow network In certain situations, such as running a RADIUS proxy in a slow network, the Identity Management (IdM) Kerberos Key Distribution Center (KDC) closes the connection before the RADIUS server responds because the connection timed out while waiting for the user to enter the token. To change the timeout settings of the KDC: Change the value of the timeout parameter in the [otp] section in the /var/kerberos/krb5kdc/kdc.conf file. For example, to set the timeout to 120 seconds: Restart the krb5kdc service: Additional resources How to configure FreeRADIUS authentication in FIPS mode (Red Hat Knowledgebase) 7.5. Adding OTP tokens in the Web UI The following section helps you to add a token to the IdM Web UI and to your software token generator. Prerequisites Active user account on the IdM server. Administrator has enabled OTP for the particular user account in the IdM Web UI. A software device generating OTP tokens, for example FreeOTP. Procedure Log in to the IdM Web UI with your user name and password. To create the token in your mobile phone, open the Authentication OTP Tokens tab. Click Add . In the Add OTP token dialog box, leave everything unfilled and click Add . At this stage, the IdM server creates a token with default parameters at the server and opens a page with a QR code. Copy the QR code into your mobile phone. Click OK to close the QR code. Now you can generate one time passwords and log in with them to the IdM Web UI. 7.6. Logging into the Web UI with a one time password Follow this procedure to log in for the first time to the IdM Web UI using a one time password (OTP). 
7.6. Logging into the Web UI with a one time password Follow this procedure to log in for the first time to the IdM Web UI using a one time password (OTP). Prerequisites OTP configuration enabled on the Identity Management server for the user account you are using for OTP authentication. Administrators as well as users themselves can enable OTP. To enable the OTP configuration, see Enabling the one time password in the Web UI . A hardware or software device generating OTP tokens configured. Procedure In the Identity Management login screen, enter your user name or a user name of the IdM server administrator account. Add the password for the user name entered above. Generate a one time password on your device. Enter the one time password right after the password (without a space). Click Log in . If the authentication fails, synchronize OTP tokens. If your CA uses a self-signed certificate, the browser issues a warning. Check the certificate and accept the security exception to proceed with the login. If the IdM Web UI does not open, verify the DNS configuration of your Identity Management server. After successful login, the IdM Web UI appears. 7.7. Synchronizing OTP tokens using the Web UI If the login with OTP (One Time Password) fails, the OTP tokens might not be synchronized correctly. The following text describes token re-synchronization. Prerequisites A login screen opened. A device generating OTP tokens configured. Procedure On the IdM Web UI login screen, click Sync OTP Token . In the login screen, enter your username and the Identity Management password. Generate a one time password and enter it in the First OTP field. Generate another one time password and enter it in the Second OTP field. Optional: Enter the token ID. Click Sync OTP Token . After the successful synchronization, you can log in to the IdM server. 7.8. Changing expired passwords Administrators of Identity Management can require you to change your password at login. This means that you cannot successfully log in to the IdM Web UI until you change the password. Password expiration can happen during your first login to the Web UI. If the password expiration dialog appears, follow the instructions in the procedure. Prerequisites A login screen opened. Active account on the IdM server. Procedure In the password expiration login screen, enter the user name. Add the password for the user name entered above. If you use one time password authentication, generate a one time password and enter it in the OTP field. If you have not enabled OTP authentication, leave the field empty. Enter the new password twice for verification. Click Reset Password . After the successful password change, the usual login dialog displays. Log in with the new password. 7.9. Retrieving an IdM ticket-granting ticket as an OTP or RADIUS user To retrieve a Kerberos ticket-granting ticket (TGT) as an OTP user, request an anonymous Kerberos ticket and enable a Flexible Authentication via Secure Tunneling (FAST) channel to provide a secure connection between the Kerberos client and the Key Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 8.7 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You have enabled OTP for the required user account. Procedure Initialize the credentials cache by running the following command: Note that this command creates the armor.ccache file that you need to point to whenever you request a new Kerberos ticket. Request a Kerberos ticket by running the command: Verification Display your Kerberos ticket information: The pa_type = 141 indicates OTP/RADIUS authentication.
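Before attempting an OTP login or TGT retrieval, you can verify from the command line that the prerequisite is met and OTP is actually enabled for the account. A minimal sketch, assuming the example.user account; the grep pattern matches the label that ipa user-show usually prints when authentication types are set:
ipa user-show example.user --all | grep -i "user authentication types"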
7.10. Enforcing OTP usage for all LDAP clients In RHEL IdM, you can set the default behavior for LDAP server authentication of user accounts with two-factor (OTP) authentication configured. If OTP is enforced, LDAP clients cannot authenticate against an LDAP server using single-factor authentication (a password) for users that have associated OTP tokens. RHEL IdM already enforces this method through the Kerberos backend by using a special LDAP control with OID 2.16.840.1.113730.3.8.10.7 without any data. Procedure To enforce OTP usage for all LDAP clients, use the following command: To revert to the previous behavior, where OTP usage is not enforced for LDAP clients, use the following command:
[ "ipa radiusproxy-add proxy_name --secret secret", "ipa user-mod radiususer --radius=proxy_name", "ipa user-mod radiususer --radius-username=radius_user", "[otp] DEFAULT = { timeout = 120 }", "systemctl restart krb5kdc", "kinit -n @IDM.EXAMPLE.COM -c FILE:armor.ccache", "kinit -T FILE:armor.ccache <username>@IDM.EXAMPLE.COM Enter your OTP Token Value.", "klist -C Ticket cache: KCM:0:58420 Default principal: <username>@IDM.EXAMPLE.COM Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 141", "ipa config-mod --addattr ipaconfigstring=EnforceLDAPOTP", "ipa config-mod --delattr ipaconfigstring=EnforceLDAPOTP" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/accessing_identity_management_services/logging-in-to-the-ipa-web-ui-using-one-time-passwords_accessing-idm-services
14.3. DHCP Relay Agent
14.3. DHCP Relay Agent The DHCP Relay Agent ( dhcrelay ) enables the relay of DHCP and BOOTP requests from a subnet with no DHCP server on it to one or more DHCP servers on other subnets. When a DHCP client requests information, the DHCP Relay Agent forwards the request to the list of DHCP servers specified when the DHCP Relay Agent is started. When a DHCP server returns a reply, the reply is broadcast or unicast on the network that sent the original request. The DHCP Relay Agent for IPv4 , dhcrelay , listens for DHCPv4 and BOOTP requests on all interfaces unless the interfaces are specified in /etc/sysconfig/dhcrelay with the INTERFACES directive. See Section 14.3.1, "Configure dhcrelay as a DHCPv4 and BOOTP relay agent" . The DHCP Relay Agent for IPv6 , dhcrelay6 , does not have this default behavior, and the interfaces on which to listen for DHCPv6 requests must be specified. See Section 14.3.2, "Configure dhcrelay as a DHCPv6 relay agent" . dhcrelay can either be run as a DHCPv4 and BOOTP relay agent (by default) or as a DHCPv6 relay agent (with the -6 argument). To see the usage message, issue the command dhcrelay -h . 14.3.1. Configure dhcrelay as a DHCPv4 and BOOTP relay agent To run dhcrelay in DHCPv4 and BOOTP mode, specify the servers to which the requests should be forwarded. Copy and then edit the dhcrelay.service file as the root user: Edit the ExecStart option under the [Service] section and add one or more server IPv4 addresses to the end of the line, for example: ExecStart=/usr/sbin/dhcrelay -d --no-pid 192.168.1.1 If you also want to specify interfaces where the DHCP Relay Agent listens for DHCP requests, add them to the ExecStart option with the -i argument (otherwise it will listen on all interfaces), for example: ExecStart=/usr/sbin/dhcrelay -d --no-pid 192.168.1.1 -i em1 For other options see the dhcrelay(8) man page. To activate the changes made, as the root user, restart the service: 14.3.2. Configure dhcrelay as a DHCPv6 relay agent To run dhcrelay in DHCPv6 mode, add the -6 argument and specify the " lower interface " (on which queries will be received from clients or from other relay agents) and the " upper interface " (to which queries from clients and other relay agents should be forwarded). Copy dhcrelay.service to dhcrelay6.service and edit it as the root user: Edit the ExecStart option under the [Service] section, add the -6 argument, and add the " lower interface " and " upper interface " interfaces, for example: ExecStart=/usr/sbin/dhcrelay -d --no-pid -6 -l em1 -u em2 For other options see the dhcrelay(8) man page. To activate the changes made, as the root user, restart the service:
[ "~]# cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/ ~]# vi /etc/systemd/system/dhcrelay.service", "~]# systemctl --system daemon-reload ~]# systemctl restart dhcrelay", "~]# cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/dhcrelay6.service ~]# vi /etc/systemd/system/dhcrelay6.service", "~]# systemctl --system daemon-reload ~]# systemctl restart dhcrelay6" ]
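As a combined, non-interactive sketch of the DHCPv6 relay steps above, the service file can also be edited with sed instead of vi; the interface names em1 (lower) and em2 (upper) are the placeholders from the example and must be replaced with your own interfaces:
~]# cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/dhcrelay6.service
~]# sed -i 's|^ExecStart=.*|ExecStart=/usr/sbin/dhcrelay -d --no-pid -6 -l em1 -u em2|' /etc/systemd/system/dhcrelay6.service
~]# systemctl --system daemon-reload
~]# systemctl restart dhcrelay6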
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/dhcp-relay-agent
Red Hat Ansible Automation Platform automation mesh guide for VM-based installations
Red Hat Ansible Automation Platform automation mesh guide for VM-based installations Red Hat Ansible Automation Platform 2.4 Automate at scale in a cloud-native way Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/index
Chapter 17. complete.adoc
Chapter 17. complete.adoc This chapter describes the commands under the complete.adoc command. 17.1. complete print bash completion command Usage: Table 17.1. Command arguments Value Summary -h, --help Show this help message and exit --name <command_name> Command name to support with command completion --shell <shell> Shell being used. use none for data only (default: bash)
[ "openstack complete [-h] [--name <command_name>] [--shell <shell>]" ]
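A common way to use this command is to generate the completion script once and install it for bash. The following is a sketch; the target file name under /etc/bash_completion.d/ is an assumption and may differ on your system:
openstack complete --shell bash | sudo tee /etc/bash_completion.d/osc.bash_completion > /dev/null
source /etc/bash_completion.d/osc.bash_completion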
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/complete_adoc
Installing and Configuring Central Authentication for the Ansible Automation Platform
Installing and Configuring Central Authentication for the Ansible Automation Platform Red Hat Ansible Automation Platform 2.3 Enable central authentication functions for your Ansible Automation Platform Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/index
Chapter 18. Rebalancing clusters using Cruise Control
Chapter 18. Rebalancing clusters using Cruise Control Cruise Control is an open source system that supports the following Kafka operations: Monitoring cluster workload Rebalancing a cluster based on predefined constraints The operations help with running a more balanced Kafka cluster that uses broker pods more efficiently. A typical cluster can become unevenly loaded over time. Partitions that handle large amounts of message traffic might not be evenly distributed across the available brokers. To rebalance the cluster, administrators must monitor the load on brokers and manually reassign busy partitions to brokers with spare capacity. Cruise Control automates the cluster rebalancing process. It constructs a workload model of resource utilization for the cluster- based on CPU, disk, and network load- and generates optimization proposals (that you can approve or reject) for more balanced partition assignments. A set of configurable optimization goals is used to calculate these proposals. You can generate optimization proposals in specific modes. The default full mode rebalances partitions across all brokers. You can also use the add-brokers and remove-brokers modes to accommodate changes when scaling a cluster up or down. When you approve an optimization proposal, Cruise Control applies it to your Kafka cluster. You configure and generate optimization proposals using a KafkaRebalance resource. You can configure the resource using an annotation so that optimization proposals are approved automatically or manually. Note AMQ Streams provides example configuration files for Cruise Control . 18.1. Cruise Control components and features Cruise Control consists of four main components- the Load Monitor, the Analyzer, the Anomaly Detector, and the Executor- and a REST API for client interactions. AMQ Streams utilizes the REST API to support the following Cruise Control features: Generating optimization proposals from optimization goals. Rebalancing a Kafka cluster based on an optimization proposal. Optimization goals An optimization goal describes a specific objective to achieve from a rebalance. For example, a goal might be to distribute topic replicas across brokers more evenly. You can change what goals to include through configuration. A goal is defined as a hard goal or soft goal. You can add hard goals through Cruise Control deployment configuration. You also have main, default, and user-provided goals that fit into each of these categories. Hard goals are preset and must be satisfied for an optimization proposal to be successful. Soft goals do not need to be satisfied for an optimization proposal to be successful. They can be set aside if it means that all hard goals are met. Main goals are inherited from Cruise Control. Some are preset as hard goals. Main goals are used in optimization proposals by default. Default goals are the same as the main goals by default. You can specify your own set of default goals. User-provided goals are a subset of default goals that are configured for generating a specific optimization proposal. Optimization proposals Optimization proposals comprise the goals you want to achieve from a rebalance. You generate an optimization proposal to create a summary of proposed changes and the results that are possible with the rebalance. The goals are assessed in a specific order of priority. You can then choose to approve or reject the proposal. You can reject the proposal to run it again with an adjusted set of goals. 
You can generate an optimization proposal in one of three modes. full is the default mode and runs a full rebalance. add-brokers is the mode you use after adding brokers when scaling up a Kafka cluster. remove-brokers is the mode you use before removing brokers when scaling down a Kafka cluster. Other Cruise Control features are not currently supported, including self healing, notifications, write-your-own goals, and changing the topic replication factor. Additional resources Cruise Control documentation 18.2. Optimization goals overview Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals , which you can approve or reject. 18.2.1. Goals order of priority AMQ Streams supports most of the optimization goals developed in the Cruise Control project. The supported goals, in the default descending order of priority, are as follows: Rack-awareness Minimum number of leader replicas per broker for a set of topics Replica capacity Capacity goals Disk capacity Network inbound capacity Network outbound capacity CPU capacity Replica distribution Potential network output Resource distribution goals Disk utilization distribution Network inbound utilization distribution Network outbound utilization distribution CPU utilization distribution Leader bytes-in rate distribution Topic replica distribution Leader replica distribution Preferred leader election Intra-broker disk capacity Intra-broker disk usage distribution For more information on each optimization goal, see Goals in the Cruise Control Wiki. Note "Write your own" goals and Kafka assigner goals are not yet supported. 18.2.2. Goals configuration in AMQ Streams custom resources You configure optimization goals in Kafka and KafkaRebalance custom resources. Cruise Control has configurations for hard optimization goals that must be satisfied, as well as main, default, and user-provided optimization goals. You can specify optimization goals in the following configuration: Main goals - Kafka.spec.cruiseControl.config.goals Hard goals - Kafka.spec.cruiseControl.config.hard.goals Default goals - Kafka.spec.cruiseControl.config.default.goals User-provided goals - KafkaRebalance.spec.goals Note Resource distribution goals are subject to capacity limits on broker resources. 18.2.3. Hard and soft optimization goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. An optimization proposal that violates one or more soft goals, but satisfies all hard goals, is valid. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by Cruise Control and not sent to the user for approval. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. 
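To see whether your cluster overrides the preset hard goals, you can read the hard.goals entry directly from the Kafka resource. A minimal sketch, assuming a Kafka cluster named my-cluster in the kafka namespace; an empty result means hard.goals is not set and the Cruise Control presets apply:
oc get kafka my-cluster -n kafka -o jsonpath='{.spec.cruiseControl.config.hard\.goals}'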
In Cruise Control, the following main optimization goals are preset as hard goals: You configure hard goals in the Cruise Control deployment configuration, by editing the hard.goals property in Kafka.spec.cruiseControl.config . To inherit the preset hard goals from Cruise Control, do not specify the hard.goals property in Kafka.spec.cruiseControl.config To change the preset hard goals, specify the desired goals in the hard.goals property, using their fully-qualified domain names. Example Kafka configuration for hard optimization goals apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal # ... Increasing the number of configured hard goals will reduce the likelihood of Cruise Control generating valid optimization proposals. If skipHardGoalCheck: true is specified in the KafkaRebalance custom resource, Cruise Control does not check that the list of user-provided optimization goals (in KafkaRebalance.spec.goals ) contains all the configured hard goals ( hard.goals ). Therefore, if some, but not all, of the user-provided optimization goals are in the hard.goals list, Cruise Control will still treat them as hard goals even if skipHardGoalCheck: true is specified. 18.2.4. Main optimization goals The main optimization goals are available to all users. Goals that are not listed in the main optimization goals are not available for use in Cruise Control operations. Unless you change the Cruise Control deployment configuration , AMQ Streams will inherit the following main optimization goals from Cruise Control, in descending priority order: Some of these goals are preset as hard goals . To reduce complexity, we recommend that you use the inherited main optimization goals, unless you need to completely exclude one or more goals from use in KafkaRebalance resources. The priority order of the main optimization goals can be modified, if desired, in the configuration for default optimization goals . You configure main optimization goals, if necessary, in the Cruise Control deployment configuration: Kafka.spec.cruiseControl.config.goals To accept the inherited main optimization goals, do not specify the goals property in Kafka.spec.cruiseControl.config . If you need to modify the inherited main optimization goals, specify a list of goals, in descending priority order, in the goals configuration option. Note To avoid errors when generating optimization proposals, make sure that any changes you make to the goals or default.goals in Kafka.spec.cruiseControl.config include all of the hard goals specified for the hard.goals property. To clarify, the hard goals must also be specified (as a subset) for the main optimization goals and default goals. 18.2.5. Default optimization goals Cruise Control uses the default optimization goals to generate the cached optimization proposal . For more information about the cached optimization proposal, see Section 18.3, "Optimization proposals overview" . 
You can override the default optimization goals by setting user-provided optimization goals in a KafkaRebalance custom resource. Unless you specify default.goals in the Cruise Control deployment configuration , the main optimization goals are used as the default optimization goals. In this case, the cached optimization proposal is generated using the main optimization goals. To use the main optimization goals as the default goals, do not specify the default.goals property in Kafka.spec.cruiseControl.config . To modify the default optimization goals, edit the default.goals property in Kafka.spec.cruiseControl.config . You must use a subset of the main optimization goals. Example Kafka configuration for default optimization goals apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal # ... If no default optimization goals are specified, the cached proposal is generated using the main optimization goals. 18.2.6. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, in spec.goals in a KafkaRebalance custom resource: User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you create a KafkaRebalance custom resource containing a single user-provided goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the main optimization goals To ignore the configured hard goals when generating an optimization proposal, add the skipHardGoalCheck: true property to the KafkaRebalance custom resource. See Section 18.6, "Generating optimization proposals" . Additional resources Configuring and deploying Cruise Control with Kafka Configurations in the Cruise Control Wiki. 18.3. Optimization proposals overview Configure a KafkaRebalance resource to generate optimization proposals and apply the suggested changes. An optimization proposal is a summary of proposed changes that would produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources . All optimization proposals are estimates of the impact of a proposed rebalance. You can approve or reject a proposal. You cannot approve a cluster rebalance without first generating the optimization proposal. You can run optimization proposals in one of the following rebalancing modes: full add-brokers remove-brokers 18.3.1. Rebalancing modes You specify a rebalancing mode using the spec.mode property of the KafkaRebalance custom resource. 
full The full mode runs a full rebalance by moving replicas across all the brokers in the cluster. This is the default mode if the spec.mode property is not defined in the KafkaRebalance custom resource. add-brokers The add-brokers mode is used after scaling up a Kafka cluster by adding one or more brokers. Normally, after scaling up a Kafka cluster, new brokers are used to host only the partitions of newly created topics. If no new topics are created, the newly added brokers are not used and the existing brokers remain under the same load. By using the add-brokers mode immediately after adding brokers to the cluster, the rebalancing operation moves replicas from existing brokers to the newly added brokers. You specify the new brokers as a list using the spec.brokers property of the KafkaRebalance custom resource. remove-brokers The remove-brokers mode is used before scaling down a Kafka cluster by removing one or more brokers. If you scale down a Kafka cluster, brokers are shut down even if they host replicas. This can lead to under-replicated partitions and possibly result in some partitions being under their minimum ISR (in-sync replicas). To avoid this potential problem, the remove-brokers mode moves replicas off the brokers that are going to be removed. When these brokers are not hosting replicas anymore, you can safely run the scaling down operation. You specify the brokers you're removing as a list in the spec.brokers property in the KafkaRebalance custom resource. In general, use the full rebalance mode to rebalance a Kafka cluster by spreading the load across brokers. Use the add-brokers and remove-brokers modes only if you want to scale your cluster up or down and rebalance the replicas accordingly. The procedure to run a rebalance is actually the same across the three different modes. The only difference is with specifying a mode through the spec.mode property and, if needed, listing brokers that have been added or will be removed through the spec.brokers property. 18.3.2. The results of an optimization proposal When an optimization proposal is generated, a summary and broker load is returned. Summary The summary is contained in the KafkaRebalance resource. The summary provides an overview of the proposed cluster rebalance and indicates the scale of the changes involved. A summary of a successfully generated optimization proposal is contained in the Status.OptimizationResult property of the KafkaRebalance resource. The information provided is a summary of the full optimization proposal. Broker load The broker load is stored in a ConfigMap that contains data as a JSON string. The broker load shows before and after values for the proposed rebalance, so you can see the impact on each of the brokers in the cluster. 18.3.3. Manually approving or rejecting an optimization proposal An optimization proposal summary shows the proposed scope of changes. You can use the name of the KafkaRebalance resource to return a summary from the command line. Returning an optimization proposal summary oc describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace> You can also use the jq command line JSON parser tool. Returning an optimization proposal summary using jq oc get kafkarebalance -o json | jq <jq_query> . Use the summary to decide whether to approve or reject an optimization proposal. Approving an optimization proposal You approve the optimization proposal by setting the strimzi.io/rebalance annotation of the KafkaRebalance resource to approve . 
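As a concrete sketch of retrieving the summary and approving it, assuming a KafkaRebalance resource named my-rebalance in the kafka namespace (optimizationResult is the camelCase JSON form of the Status.OptimizationResult property):
oc get kafkarebalance my-rebalance -n kafka -o json | jq '.status.optimizationResult'
oc annotate kafkarebalance my-rebalance -n kafka strimzi.io/rebalance=approve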
Cruise Control applies the proposal to the Kafka cluster and starts a cluster rebalance operation. Rejecting an optimization proposal If you choose not to approve an optimization proposal, you can change the optimization goals or update any of the rebalance performance tuning options , and then generate another proposal. You can generate a new optimization proposal for a KafkaRebalance resource by setting the strimzi.io/rebalance annotation to refresh . Use optimization proposals to assess the movements required for a rebalance. For example, a summary describes inter-broker and intra-broker movements. Inter-broker rebalancing moves data between separate brokers. Intra-broker rebalancing moves data between disks on the same broker when you are using a JBOD storage configuration. Such information can be useful even if you don't go ahead and approve the proposal. You might reject an optimization proposal, or delay its approval, because of the additional load on a Kafka cluster when rebalancing. In the following example, the proposal suggests the rebalancing of data between separate brokers. The rebalance involves the movement of 55 partition replicas, totaling 12MB of data, across the brokers. Though the inter-broker movement of partition replicas has a high impact on performance, the total amount of data is not large. If the total data was much larger, you could reject the proposal, or time when to approve the rebalance to limit the impact on the performance of the Kafka cluster. Rebalance performance tuning options can help reduce the impact of data movement. If you can extend the rebalance period, you can divide the rebalance into smaller batches. Fewer data movements at a single time reduces the load on the cluster. Example optimization proposal summary Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: # ... Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184 The proposal will also move 24 partition leaders to different brokers. This requires a change to the ZooKeeper configuration, which has a low impact on performance. The balancedness scores are measurements of the overall balance of the Kafka cluster before and after the optimization proposal is approved. A balancedness score is based on optimization goals. If all goals are satisfied, the score is 100. The score is reduced for each goal that will not be met. Compare the balancedness scores to see whether the Kafka cluster is less balanced than it could be following a rebalance. 18.3.4. Automatically approving an optimization proposal To save time, you can automate the process of approving optimization proposals. With automation, when you generate an optimization proposal it goes straight into a cluster rebalance. To enable the optimization proposal auto-approval mechanism, create the KafkaRebalance resource with the strimzi.io/rebalance-auto-approval annotation set to true . 
If the annotation is not set or set to false , the optimization proposal requires manual approval. Example rebalance request with auto-approval mechanism enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: "true" spec: mode: # any mode # ... You can still check the status when automatically approving an optimization proposal. The status of the KafkaRebalance resource moves to Ready when the rebalance is complete. 18.3.5. Optimization proposal summary properties The following table explains the properties contained in the optimization proposal's summary section. Table 18.1. Properties contained in an optimization proposal summary JSON property Description numIntraBrokerReplicaMovements The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but lower than numReplicaMovements . excludedBrokersForLeadership Not yet supported. An empty list is returned. numReplicaMovements The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. onDemandBalancednessScoreBefore, onDemandBalancednessScoreAfter A measurement of the overall balancedness of a Kafka Cluster, before and after the optimization proposal was generated. The score is calculated by subtracting the sum of the BalancednessScore of each violated soft goal from 100. Cruise Control assigns a BalancednessScore to every optimization goal based on several factors, including priority- the goal's position in the list of default.goals or user-provided goals. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. intraBrokerDataToMoveMB The sum of the size of each partition replica that will be moved between disks on the same broker (see also numIntraBrokerReplicaMovements ). Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see dataToMoveMB ). recentWindows The number of metrics windows upon which the optimization proposal is based. dataToMoveMB The sum of the size of each partition replica that will be moved to a separate broker (see also numReplicaMovements ). Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. monitoredPartitionsPercentage The percentage of partitions in the Kafka cluster covered by the optimization proposal. Affected by the number of excludedTopics . excludedTopics If you specified a regular expression in the spec.excludedTopicsRegex property in the KafkaRebalance resource, all topic names matching that expression are listed here. These topics are excluded from the calculation of partition replica/leader movements in the optimization proposal. numLeaderMovements The number of partitions whose leaders will be switched to different replicas. This involves a change to ZooKeeper configuration. Performance impact during rebalance operation : Relatively low. excludedBrokersForReplicaMove Not yet supported. An empty list is returned. 18.3.6. 
Broker load properties The broker load is stored in a ConfigMap (with the same name as the KafkaRebalance custom resource) as a JSON formatted string. This JSON string consists of a JSON object with a key for each broker ID, linking to a number of metrics for each broker. Each metric consists of three values. The first is the metric value before the optimization proposal is applied, the second is the expected value of the metric after the proposal is applied, and the third is the difference between the first two values (after minus before). Note The ConfigMap appears when the KafkaRebalance resource is in the ProposalReady state and remains after the rebalance is complete. You can use the name of the ConfigMap to view its data from the command line. Returning ConfigMap data oc describe configmaps <my_rebalance_configmap_name> -n <namespace> You can also use the jq command line JSON parser tool to extract the JSON string from the ConfigMap. Extracting the JSON string from the ConfigMap using jq oc get configmaps <my_rebalance_configmap_name> -o json | jq '.["data"]["brokerLoad.json"]|fromjson|.' The following table explains the properties contained in the optimization proposal's broker load ConfigMap: JSON property Description leaders The number of replicas on this broker that are partition leaders. replicas The number of replicas on this broker. cpuPercentage The CPU utilization as a percentage of the defined capacity. diskUsedPercentage The disk utilization as a percentage of the defined capacity. diskUsedMB The absolute disk usage in MB. networkOutRate The total network output rate for the broker. leaderNetworkInRate The network input rate for all partition leader replicas on this broker. followerNetworkInRate The network input rate for all follower replicas on this broker. potentialMaxNetworkOutRate The hypothetical maximum network output rate that would be realized if this broker became the leader of all the replicas it currently hosts. 18.3.7. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals. Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. If you generate an optimization proposal using the default optimization goals, Cruise Control returns the most recent cached proposal. To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the Cruise Control deployment configuration. Consider a shorter interval for fast-changing clusters, although this increases the load on the Cruise Control server. Additional resources Section 18.2, "Optimization goals overview" Section 18.6, "Generating optimization proposals" Section 18.7, "Approving an optimization proposal" 18.4. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replica and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. 18.4.1. Partition reassignment commands Optimization proposals are composed of separate partition reassignment commands. When you approve a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement: Involves transferring the partition replica and its data to a new location.
Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement: This involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. 18.4.2. Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which simply applies the commands in the order they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides four alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in order of ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in order of descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. PrioritizeMinIsrWithOfflineReplicasStrategy : Prioritize reassignments with (At/Under)MinISR partitions with offline replicas. This strategy will only work if cruiseControl.config.concurrency.adjuster.min.isr.check.enabled is set to true in the Kafka custom resource's spec. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the strategy in the sequence to decide the order, and so on. 18.4.3. Intra-broker disk balancing Moving a large amount of data between disks on the same broker has less impact than between separate brokers. If you are running a Kafka deployment that uses JBOD storage with multiple disks on the same broker, Cruise Control can balance partitions between the disks. Note If you are using JBOD storage with a single disk, intra-broker disk balancing will result in a proposal with 0 partition movements since there are no disks to balance between. To perform an intra-broker disk balance, set rebalanceDisk to true under the KafkaRebalance.spec . When setting rebalanceDisk to true , do not set a goals field in the KafkaRebalance.spec , as Cruise Control will automatically set the intra-broker goals and ignore the inter-broker goals. Cruise Control does not perform inter-broker and intra-broker balancing at the same time. 18.4.4. Rebalance tuning options Cruise Control provides several configuration options for tuning the rebalance parameters discussed above. You can set these tuning options when configuring and deploying Cruise Control with Kafka or optimization proposal levels: The Cruise Control server setting can be set in the Kafka custom resource under Kafka.spec.cruiseControl.config . The individual rebalance performance configurations can be set under KafkaRebalance.spec . The relevant configurations are summarized in the following table. Table 18.2. 
Rebalance performance tuning configuration Cruise Control properties KafkaRebalance properties Default Description num.concurrent.partition.movements.per.broker concurrentPartitionMovementsPerBroker 5 The maximum number of inter-broker partition movements in each partition reassignment batch num.concurrent.intra.broker.partition.movements concurrentIntraBrokerPartitionMovements 2 The maximum number of intra-broker partition movements in each partition reassignment batch num.concurrent.leader.movements concurrentLeaderMovements 1000 The maximum number of partition leadership changes in each partition reassignment batch default.replication.throttle replicationThrottle Null (no limit) The bandwidth (in bytes per second) to assign to partition reassignment default.replica.movement.strategies replicaMovementStrategies BaseReplicaMovementStrategy The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. For the server setting, use a comma separated string with the fully qualified names of the strategy class (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the KafkaRebalance resource setting use a YAML array of strategy class names. - rebalanceDisk false Enables intra-broker disk balancing, which balances disk space utilization between disks on the same broker. Only applies to Kafka deployments that use JBOD storage with multiple disks. Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources CruiseControlSpec schema reference KafkaRebalanceSpec schema reference 18.5. Configuring and deploying Cruise Control with Kafka Configure a Kafka resource to deploy Cruise Control alongside a Kafka cluster. You can use the cruiseControl properties of the Kafka resource to configure the deployment. Deploy one instance of Cruise Control per Kafka cluster. Use goals configuration in the Cruise Control config to specify optimization goals for generating optimization proposals. You can use brokerCapacity to change the default capacity limits for goals related to resource distribution. If brokers are running on nodes with heterogeneous network resources, you can use overrides to set network capacity limits for each broker. If an empty object ( {} ) is used for the cruiseControl configuration, all properties use their default values. For more information on the configuration options for Cruise Control, see the AMQ Streams Custom Resource API Reference . Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the cruiseControl property for the Kafka resource. The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s overrides: 2 - brokers: [0] inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s # ... 
config: 3 # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > 4 com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # ... hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal # ... cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true 5 webserver.http.cors.origin: "*" webserver.http.cors.exposeheaders: "User-Task-ID,Content-Type" # ... resources: 6 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 7 type: inline loggers: rootLogger.level: INFO template: 8 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 10 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml # ... 1 Capacity limits for broker resources. 2 Overrides set network capacity limits for specific brokers when running on nodes with heterogeneous network resources. 3 Cruise Control configuration. Standard Cruise Control configuration may be provided, restricted to those properties not managed directly by AMQ Streams. 4 Optimization goals configuration, which can include configuration for default optimization goals ( default.goals ), main optimization goals ( goals ), and hard goals ( hard.goals ). 5 CORS enabled and configured for read-only access to the Cruise Control API. 6 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 7 Cruise Control loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties key in the ConfigMap. Cruise Control has a single logger named rootLogger.level . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 8 Template customization. Here a pod is scheduled with additional security attributes. 9 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 10 Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter). Create or update the resource: oc apply -f <kafka_configuration_file> Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1 my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . Auto-created topics The following table shows the three topics that are automatically created when Cruise Control is deployed. These topics are required for Cruise Control to work properly and must not be deleted or changed. You can change the name of the topic using the specified configuration option. Table 18.3. Auto-created topics Auto-created topic configuration Default topic name Created by Function metric.reporter.topic strimzi.cruisecontrol.metrics AMQ Streams Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker. 
partition.metric.sample.store.topic strimzi.cruisecontrol.partitionmetricsamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . broker.metric.sample.store.topic strimzi.cruisecontrol.modeltrainingsamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To prevent the removal of records that are needed by Cruise Control, log compaction is disabled in the auto-created topics. Note If the names of the auto-created topics are changed in a Kafka cluster that already has Cruise Control enabled, the old topics will not be deleted and should be manually removed. What to do After configuring and deploying Cruise Control, you can generate optimization proposals . Additional resources Optimization goals overview 18.6. Generating optimization proposals When you create or update a KafkaRebalance resource, Cruise Control generates an optimization proposal for the Kafka cluster based on the configured optimization goals . Analyze the information in the optimization proposal and decide whether to approve it. You can use the results of the optimization proposal to rebalance your Kafka cluster. You can run the optimization proposal in one of the following modes: full (default) add-brokers remove-brokers The mode you use depends on whether you are rebalancing across all the brokers already running in the Kafka cluster; or you want to rebalance after scaling up or before scaling down your Kafka cluster. For more information, see Rebalancing modes with broker scaling . Prerequisites You have deployed Cruise Control to your AMQ Streams cluster. You have configured optimization goals and, optionally, capacity limits on broker resources. For more information on configuring Cruise Control, see Section 18.5, "Configuring and deploying Cruise Control with Kafka" . Procedure Create a KafkaRebalance resource and specify the appropriate mode. full mode (default) To use the default optimization goals defined in the Kafka resource, leave the spec property empty. Cruise Control rebalances a Kafka cluster in full mode by default. Example configuration with full rebalancing by default apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {} You can also run a full rebalance by specifying the full mode through the spec.mode property. Example configuration specifying full mode apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: full add-brokers mode If you want to rebalance a Kafka cluster after scaling up, specify the add-brokers mode. In this mode, existing replicas are moved to the newly added brokers. You need to specify the brokers as a list. Example configuration specifying add-brokers mode apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: add-brokers brokers: [3, 4] 1 1 List of newly added brokers added by the scale up operation. This property is mandatory. remove-brokers mode If you want to rebalance a Kafka cluster before scaling down, specify the remove-brokers mode. In this mode, replicas are moved off the brokers that are going to be removed. You need to specify the brokers that are being removed as a list. 
Example configuration specifying remove-brokers mode apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-brokers brokers: [3, 4] 1 1 List of brokers to be removed by the scale down operation. This property is mandatory. Note The following steps and the steps to approve or stop a rebalance are the same regardless of the rebalance mode you are using. To configure user-provided optimization goals instead of using the default goals, add the goals property and enter one or more goals. In the following example, rack awareness and replica capacity are configured as user-provided optimization goals: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal To ignore the configured hard goals, add the skipHardGoalCheck: true property: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true (Optional) To approve the optimization proposal automatically, set the strimzi.io/rebalance-auto-approval annotation to true : apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: "true" spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true Create or update the resource: oc apply -f <kafka_rebalance_configuration_file> The Cluster Operator requests the optimization proposal from Cruise Control. This might take a few minutes depending on the size of the Kafka cluster. If you used the automatic approval mechanism, wait for the status of the optimization proposal to change to Ready . If you haven't enabled the automatic approval mechanism, wait for the status of the optimization proposal to change to ProposalReady : oc get kafkarebalance -o wide -w -n <namespace> PendingProposal A PendingProposal status means the rebalance operator is polling the Cruise Control API to check if the optimization proposal is ready. ProposalReady A ProposalReady status means the optimization proposal is ready for review and approval. When the status changes to ProposalReady , the optimization proposal is ready to approve. Review the optimization proposal. The optimization proposal is contained in the Status.Optimization Result property of the KafkaRebalance resource. oc describe kafkarebalance <kafka_rebalance_resource_name> Example optimization proposal Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c The properties in the Optimization Result section describe the pending cluster rebalance operation. For descriptions of each property, see Contents of optimization proposals . 
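If you only want the headline figures from the proposal rather than the full describe output, you can select them with jq. A sketch, assuming a KafkaRebalance resource named my-rebalance in the kafka namespace; the selected keys follow the summary properties listed earlier:
oc get kafkarebalance my-rebalance -n kafka -o json | jq '.status.optimizationResult | {numReplicaMovements, dataToMoveMB, numLeaderMovements, monitoredPartitionsPercentage}'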
Insufficient CPU capacity If a Kafka cluster is overloaded in terms of CPU utilization, you might see an insufficient CPU capacity error in the KafkaRebalance status. It's worth noting that this utilization value is unaffected by the excludedTopics configuration. Although optimization proposals will not reassign replicas of excluded topics, their load is still considered in the utilization calculation. Example CPU utilization error com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. Note The error shows CPU capacity as a percentage rather than the number of CPU cores. For this reason, it does not directly map to the number of CPUs configured in the Kafka custom resource. It is like having a single virtual CPU per broker, which has the cycles of the CPUs configured in Kafka.spec.kafka.resources.limits.cpu . This has no effect on the rebalance behavior, since the ratio between CPU utilization and capacity remains the same. What to do Section 18.7, "Approving an optimization proposal" Additional resources Section 18.3, "Optimization proposals overview" 18.7. Approving an optimization proposal You can approve an optimization proposal generated by Cruise Control, if its status is ProposalReady . Cruise Control will then apply the optimization proposal to the Kafka cluster, reassigning partitions to brokers and changing partition leadership. Caution This is not a dry run. Before you approve an optimization proposal, you must: Refresh the proposal in case it has become out of date. Carefully review the contents of the proposal . Prerequisites You have generated an optimization proposal from Cruise Control. The KafkaRebalance custom resource status is ProposalReady . Procedure Perform these steps for the optimization proposal that you want to approve. Unless the optimization proposal is newly generated, check that it is based on current information about the state of the Kafka cluster. To do so, refresh the optimization proposal to make sure it uses the latest cluster metrics: Annotate the KafkaRebalance resource in OpenShift with strimzi.io/rebalance=refresh : oc annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=refresh Wait for the status of the optimization proposal to change to ProposalReady : oc get kafkarebalance -o wide -w -n <namespace> PendingProposal A PendingProposal status means the rebalance operator is polling the Cruise Control API to check if the optimization proposal is ready. ProposalReady A ProposalReady status means the optimization proposal is ready for review and approval. When the status changes to ProposalReady , the optimization proposal is ready to approve. Approve the optimization proposal that you want Cruise Control to apply. Annotate the KafkaRebalance resource in OpenShift with strimzi.io/rebalance=approve : oc annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=approve The Cluster Operator detects the annotated resource and instructs Cruise Control to rebalance the Kafka cluster. Wait for the status of the optimization proposal to change to Ready : oc get kafkarebalance -o wide -w -n <namespace> Rebalancing A Rebalancing status means the rebalancing is in progress. Ready A Ready status means the rebalance is complete. 
NotReady A NotReady status means an error occurred. See Fixing problems with a KafkaRebalance resource . When the status changes to Ready , the rebalance is complete. To use the same KafkaRebalance custom resource to generate another optimization proposal, apply the refresh annotation to the custom resource. This moves the custom resource to the PendingProposal or ProposalReady state. You can then review the optimization proposal and approve it, if desired. Additional resources Section 18.3, "Optimization proposals overview" Section 18.8, "Stopping a cluster rebalance" 18.8. Stopping a cluster rebalance Once started, a cluster rebalance operation might take some time to complete and affect the overall performance of the Kafka cluster. If you want to stop a cluster rebalance operation that is in progress, apply the stop annotation to the KafkaRebalance custom resource. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different from its state before the rebalance operation started. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites You have approved the optimization proposal by annotating the KafkaRebalance custom resource with approve . The status of the KafkaRebalance custom resource is Rebalancing . Procedure Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=stop Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to Stopped . Additional resources Section 18.3, "Optimization proposals overview" 18.9. Fixing problems with a KafkaRebalance resource If an issue occurs when creating a KafkaRebalance resource or interacting with Cruise Control, the error is reported in the resource status, along with details of how to fix it. The resource also moves to the NotReady state. To continue with the cluster rebalance operation, you must fix the problem in the KafkaRebalance resource itself or with the overall Cruise Control deployment. Problems might include the following: A misconfigured parameter in the KafkaRebalance resource. The strimzi.io/cluster label for specifying the Kafka cluster in the KafkaRebalance resource is missing. The Cruise Control server is not deployed because the cruiseControl property in the Kafka resource is missing. The Cruise Control server is not reachable. After fixing the issue, you need to add the refresh annotation to the KafkaRebalance resource. During a "refresh", a new optimization proposal is requested from the Cruise Control server. Prerequisites You have approved an optimization proposal . The status of the KafkaRebalance custom resource for the rebalance operation is NotReady . Procedure Get information about the error from the KafkaRebalance status: oc describe kafkarebalance rebalance-cr-name Attempt to resolve the issue in the KafkaRebalance resource. Annotate the KafkaRebalance resource in OpenShift: oc annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh Check the status of the KafkaRebalance resource: oc describe kafkarebalance rebalance-cr-name Wait until the status changes to PendingProposal , or directly to ProposalReady .
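When a KafkaRebalance resource is in the NotReady state, the error reason and message are recorded in the status conditions. As a hedged sketch (the my-rebalance name is a placeholder, and the exact condition layout can vary between versions), you can print only the conditions instead of the full describe output:

oc get kafkarebalance my-rebalance -o json | jq '.status.conditions'

Review the reason and message fields, fix the reported problem, and then apply the refresh annotation as described in the procedure above.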
Additional resources Section 18.3, "Optimization proposals overview"
[ "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal #", "RackAwareGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal #", "KafkaRebalance.spec.goals", "describe kafkarebalance <kafka_rebalance_resource_name> -n <namespace>", "get kafkarebalance -o json | jq <jq_query> .", "Name: my-rebalance Namespace: myproject Labels: strimzi.io/cluster=my-cluster Annotations: API Version: kafka.strimzi.io/v1alpha1 Kind: KafkaRebalance Metadata: Status: Conditions: Last Transition Time: 2022-04-05T14:36:11.900Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 12 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 24 Num Replica Movements: 55 On Demand Balancedness Score After: 82.91290759174306 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 5 Session Id: a4f833bd-2055-4213-bfdd-ad21f95bf184", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: mode: # any mode #", "describe configmaps <my_rebalance_configmap_name> -n <namespace>", "get configmaps <my_rebalance_configmap_name> -o json | jq '.[\"data\"][\"brokerLoad.json\"]|fromjson|.'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: brokerCapacity: 1 inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s overrides: 2 - brokers: [0] inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s # config: 3 # Note that `default.goals` (superset) must also 
include all `hard.goals` (subset) default.goals: > 4 com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal # hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal # cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true 5 webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" # resources: 6 requests: cpu: 1 memory: 512Mi limits: cpu: 2 memory: 2Gi logging: 7 type: inline loggers: rootLogger.level: INFO template: 8 pod: metadata: labels: label1: value1 securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 10 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml", "apply -f <kafka_configuration_file>", "get deployments -n <my_cluster_operator_namespace>", "NAME READY UP-TO-DATE AVAILABLE my-cluster-cruise-control 1/1 1 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: {}", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: full", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: add-brokers brokers: [3, 4] 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: mode: remove-brokers brokers: [3, 4] 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: my-cluster annotations: strimzi.io/rebalance-auto-approval: \"true\" spec: goals: - RackAwareGoal - ReplicaCapacityGoal skipHardGoalCheck: true", "apply -f <kafka_rebalance_configuration_file>", "get kafkarebalance -o wide -w -n <namespace>", "describe kafkarebalance <kafka_rebalance_resource_name>", "Status: Conditions: Last Transition Time: 2020-05-19T13:50:12.533Z Status: ProposalReady Type: State Observed Generation: 1 Optimization Result: Data To Move MB: 0 Excluded Brokers For Leadership: Excluded Brokers For Replica Move: Excluded Topics: Intra Broker Data To Move MB: 0 Monitored Partitions Percentage: 100 Num Intra Broker Replica Movements: 0 Num Leader Movements: 0 Num Replica Movements: 26 On Demand Balancedness Score After: 81.8666802863978 On Demand Balancedness Score Before: 78.01176356230222 Recent Windows: 1 Session Id: 05539377-ca7b-45ef-b359-e13564f1458c", "com.linkedin.kafka.cruisecontrol.exception.OptimizationFailureException: [CpuCapacityGoal] Insufficient capacity for cpu (Utilization 615.21, Allowed Capacity 420.00, Threshold: 0.70). Add at least 3 brokers with the same cpu capacity (100.00) as broker-0. 
Add at least 3 brokers with the same cpu capacity (100.00) as broker-0.", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=refresh", "get kafkarebalance -o wide -w -n <namespace>", "annotate kafkarebalance <kafka_rebalance_resource_name> strimzi.io/rebalance=approve", "get kafkarebalance -o wide -w -n <namespace>", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=stop", "describe kafkarebalance rebalance-cr-name", "describe kafkarebalance rebalance-cr-name", "annotate kafkarebalance rebalance-cr-name strimzi.io/rebalance=refresh", "describe kafkarebalance rebalance-cr-name" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/cruise-control-concepts-str
Deploying the Shared File Systems service with CephFS through NFS
Deploying the Shared File Systems service with CephFS through NFS Red Hat OpenStack Platform 16.0 Understanding, using, and managing the Shared File Systems service with CephFS through NFS in Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/index
Chapter 2. Resources for troubleshooting automation controller
Chapter 2. Resources for troubleshooting automation controller For information about troubleshooting automation controller, see Troubleshooting automation controller in the Automation Controller Administration Guide. For information about troubleshooting the performance of automation controller, see Performance troubleshooting for automation controller in the Automation Controller Administration Guide.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/troubleshooting_ansible_automation_platform/troubleshoot-controller
Chapter 17. Minimizing system latency by isolating interrupts and user processes
Chapter 17. Minimizing system latency by isolating interrupts and user processes Real-time environments need to minimize or eliminate latency when responding to various events. To do this, you can isolate interrupts (IRQs) and user processes from one another on different dedicated CPUs. 17.1. Interrupt and process binding Isolating interrupts (IRQs) from user processes on different dedicated CPUs can minimize or eliminate latency in real-time environments. Interrupts are generally shared evenly between CPUs. This can delay interrupt processing when the CPU has to write new data and instruction caches. These interrupt delays can cause conflicts with other processing being performed on the same CPU. It is possible to allocate time-critical interrupts and processes to a specific CPU (or a range of CPUs). In this way, the code and data structures for processing this interrupt will most likely be in the processor and instruction caches. As a result, the dedicated process can run as quickly as possible, while all other non-time-critical processes run on the other CPUs. This can be particularly important where the speeds involved are near or at the limits of memory and available peripheral bus bandwidth. Any wait for memory to be fetched into processor caches will have a noticeable impact on overall processing time and determinism. In practice, optimal performance is entirely application-specific. For example, tuning applications with similar functions for different companies required completely different optimal performance tunings. One firm saw optimal results when they isolated 2 out of 4 CPUs for operating system functions and interrupt handling. The remaining 2 CPUs were dedicated purely for application handling. Another firm found optimal determinism when they bound the network-related application processes onto a single CPU which was handling the network device driver interrupt. Important To bind a process to a CPU, you usually need to know the CPU mask for a given CPU or range of CPUs. The CPU mask is typically represented as a 32-bit bitmask, a decimal number, or a hexadecimal number, depending on the command you are using. Table 17.1. Example of the CPU Mask for given CPUs CPUs Bitmask Decimal Hexadecimal 0 00000000000000000000000000000001 1 0x00000001 0, 1 00000000000000000000000000000011 3 0x00000003 17.2. Disabling the irqbalance daemon The irqbalance daemon is enabled by default and periodically forces interrupts to be handled by CPUs in an even manner. However, in real-time deployments, irqbalance is not needed, because applications are typically bound to specific CPUs. Procedure Check the status of irqbalance . If irqbalance is running, disable it, and stop it. Verification Check that the irqbalance status is inactive. 17.3. Excluding CPUs from IRQ balancing You can use the IRQ balancing service to specify which CPUs you want to exclude from consideration for interrupt (IRQ) balancing. The IRQBALANCE_BANNED_CPUS parameter in the /etc/sysconfig/irqbalance configuration file controls these settings. The value of the parameter is a 64-bit hexadecimal bit mask, where each bit of the mask represents a CPU core. Procedure Open /etc/sysconfig/irqbalance in your preferred text editor and find the section of the file titled IRQBALANCE_BANNED_CPUS . Uncomment the IRQBALANCE_BANNED_CPUS variable. Enter the appropriate bitmask to specify the CPUs to be ignored by the IRQ balance mechanism. Save and close the file.
Restart the irqbalance service for the changes to take effect: Note If you are running a system with up to 64 CPU cores, separate each group of eight hexadecimal digits with a comma. For example: IRQBALANCE_BANNED_CPUS=00000001,0000ff00 Table 17.2. Examples CPUs Bitmask 0 00000001 8 - 15 0000ff00 8 - 15, 33 00000002,0000ff00 Note In RHEL 7.2 and higher, the irqbalance utility automatically avoids IRQs on CPU cores isolated via the isolcpus kernel parameter if IRQBALANCE_BANNED_CPUS is not set in /etc/sysconfig/irqbalance . 17.4. Manually assigning CPU affinity to individual IRQs Assigning CPU affinity enables binding and unbinding processes and threads to a specified CPU or range of CPUs. This can reduce caching problems. Procedure Check the IRQs in use by each device by viewing the /proc/interrupts file. Each line shows the IRQ number, the number of interrupts that happened in each CPU, followed by the IRQ type and a description. Write the CPU mask to the smp_affinity entry of a specific IRQ. The CPU mask must be expressed as a hexadecimal number. For example, the following command instructs IRQ number 142 to run only on CPU 0. The change only takes effect when an interrupt occurs. Verification Perform an activity that will trigger the specified interrupt. Check /proc/interrupts for changes. The number of interrupts on the specified CPU for the configured IRQ increased, and the number of interrupts for the configured IRQ on CPUs outside the specified affinity did not increase. 17.5. Binding processes to CPUs with the taskset utility The taskset utility uses the process ID (PID) of a task to view or set its CPU affinity. You can use the utility to run a command with a chosen CPU affinity. To set the affinity, you need to get the CPU mask to be as a decimal or hexadecimal number. The mask argument is a bitmask that specifies which CPU cores are legal for the command or PID being modified. Important The taskset utility works on a NUMA (Non-Uniform Memory Access) system, but it does not allow the user to bind threads to CPUs and the closest NUMA memory node. On such systems, taskset is not the preferred tool, and the numactl utility should be used instead for its advanced capabilities. For more information, see the numactl(8) man page on your system. Procedure Run taskset with the necessary options and arguments. You can specify a CPU list using the -c parameter instead of a CPU mask. In this example, my_embedded_process is being instructed to run only on CPUs 0,4,7-11. This invocation is more convenient in most cases. To set the affinity of a process that is not currently running, use taskset and specify the CPU mask and the process. In this example, my_embedded_process is being instructed to use only CPU 3 (using the decimal version of the CPU mask). You can specify more than one CPU in the bitmask. In this example, my_embedded_process is being instructed to execute on processors 4, 5, 6, and 7 (using the hexadecimal version of the CPU mask). You can set the CPU affinity for processes that are already running by using the -p ( --pid ) option with the CPU mask and the PID of the process you want to change. In this example, the process with a PID of 7013 is being instructed to run only on CPU 0. Note You can combine the listed options. Additional resources taskset(1) and numactl(8) man pages on your system
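If you prefer not to work out bitmasks by hand, shell arithmetic can derive them. The following is a minimal bash sketch: the CPU numbers are arbitrary, and the IRQ number 142 is simply reused from the example above, so substitute values that match your system.

# Build a mask for CPUs 0 and 4: (1<<0) | (1<<4) = 17 decimal = 0x11
printf '%x\n' $(( (1 << 0) | (1 << 4) ))
# prints: 11

# Route IRQ 142 so that it is serviced only by CPUs 0 and 4
echo 11 > /proc/irq/142/smp_affinity

The same value, expressed in decimal (17) or hexadecimal (0x11), can also be used as the mask argument to taskset.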
[ "systemctl status irqbalance irqbalance.service - irqbalance daemon Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled) Active: active (running) ...", "systemctl disable irqbalance systemctl stop irqbalance", "systemctl status irqbalance", "IRQBALANCE_BANNED_CPUS 64 bit bitmask which allows you to indicate which cpu's should be skipped when reblancing irqs. Cpu numbers which have their corresponding bits set to one in this mask will not have any irq's assigned to them on rebalance # #IRQBALANCE_BANNED_CPUS=", "systemctl restart irqbalance", "cat /proc/interrupts", "CPU0 CPU1 0: 26575949 11 IO-APIC-edge timer 1: 14 7 IO-APIC-edge i8042", "echo 1 > /proc/irq/142/smp_affinity", "taskset -c 0,4,7-11 /usr/local/bin/my_embedded_process", "taskset 8 /usr/local/bin/my_embedded_process", "taskset 0xF0 /usr/local/bin/my_embedded_process", "taskset -p 1 7013" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_binding-interrupts-and-processes_optimizing-rhel9-for-real-time-for-low-latency-operation
14.8.10. Edit Domain XML Configuration Files
14.8.10. Edit Domain XML Configuration Files The virsh save-image-edit file [--running | --paused] command edits the XML configuration file that is associated with a saved state file created by the virsh save command. Note that the save image records whether the domain should be restored to a --running or --paused state. Without these options, the state is determined by the file itself. By selecting --running or --paused you can override the state that virsh restore should use.
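A hedged example of the full workflow follows; the domain name guest1 and the save file path are placeholders rather than values taken from this guide.

# Save the guest's state to a file
virsh save guest1 /var/lib/libvirt/save/guest1.save

# Edit the domain XML stored in the save image and mark it to resume in the running state
virsh save-image-edit /var/lib/libvirt/save/guest1.save --running

# Restore the guest from the edited save image
virsh restore /var/lib/libvirt/save/guest1.save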
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-Starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-Edit_Domain_XML_configuration_files
Chapter 28. Graphics Driver and Miscellaneous Driver Updates
Chapter 28. Graphics Driver and Miscellaneous Driver Updates The hv_utils driver, which implements guest/host integration for Hyper-V guests, has been updated to the latest upstream version. The drm subsystem drivers (ast, bochs, cirrus, gma500, i915, mga200, nouveau, qxl, radeon, udl, and vmwgfx) have been updated to version 4.4. The xorg-x11-drv-intel driver has been updated to the latest upstream version.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/misc_drivers
Chapter 4. Configuring Red Hat Quay
Chapter 4. Configuring Red Hat Quay Before running the Red Hat Quay service as a container, you need to use that same Quay container to create the configuration file ( config.yaml ) needed to deploy Red Hat Quay. To do that, you pass a config argument and a password (replace my-secret-password here) to the Quay container. Later, you use that password to log into the configuration tool as the user quayconfig . Here's an example of how to do that: Start quay in setup mode : On the first quay node, run the following: Open browser : When the quay configuration tool starts up, open a browser to the URL and port 8080 of the system you are running the configuration tool on (for example http://myquay.example.com:8080 ). You are prompted for a username and password. Log in as quayconfig : When prompted, enter the quayconfig username and password (the one from the podman run command line). Fill in the required fields : When you start the config tool without mounting an existing configuration bundle, you will be booted into an initial setup session. In a setup session, default values will be filled automatically. The following steps will walk through how to fill out the remaining required fields. Identify the database : For the initial setup, you must include the following information about the type and location of the database to be used by Red Hat Quay: Database Type : Choose MySQL or PostgreSQL. MySQL will be used in the basic example; PostgreSQL is used with the high availability Red Hat Quay on OpenShift examples. Database Server : Identify the IP address or hostname of the database, along with the port number if it is different from 3306. Username : Identify a user with full access to the database. Password : Enter the password you assigned to the selected user. Database Name : Enter the database name you assigned when you started the database server. SSL Certificate : For production environments, you should provide an SSL certificate to connect to the database. The following figure shows an example of the screen for identifying the database used by Red Hat Quay: Identify the Redis hostname, Server Configuration and add other desired settings : Other setting you can add to complete the setup are as follows. More settings for high availability Red Hat Quay deployment that for the basic deployment: For the basic, test configuration, identifying the Redis Hostname should be all you need to do. However, you can add other features, such as Clair Scanning and Repository Mirroring, as described at the end of this procedure. For the high availability and OpenShift configurations, more settings are needed (as noted below) to allow for shared storage, secure communications between systems, and other features. Here are the settings you need to consider: Custom SSL Certificates : Upload custom or self-signed SSL certificates for use by Red Hat Quay. See Using SSL to protect connections to Red Hat Quay for details. Recommended for high availability. Important Using SSL certificates is recommended for both basic and high availability deployments. If you decide to not use SSL, you must configure your container clients to use your new Red Hat Quay setup as an insecure registry as described in Test an Insecure Registry . Basic Configuration : Upload a company logo to rebrand your Red Hat Quay registry. Server Configuration : Hostname or IP address to reach the Red Hat Quay service, along with TLS indication (recommended for production installations). 
The Server Hostname is required for all Red Hat Quay deployments. TLS termination can be done in two different ways: On the instance itself, with all TLS traffic governed by the nginx server in the Quay container (recommended). On the load balancer. This is not recommended. Access to Red Hat Quay could be lost if the TLS setup is not done correctly on the load balancer. Data Consistency Settings : Select to relax logging consistency guarantees to improve performance and availability. Time Machine : Allow older image tags to remain in the repository for set periods of time and allow users to select their own tag expiration times. redis : Identify the hostname or IP address (and optional password) to connect to the redis service used by Red Hat Quay. Repository Mirroring : Choose the checkbox to Enable Repository Mirroring. With this enabled, you can create repositories in your Red Hat Quay cluster that mirror selected repositories from remote registries. Before you can enable repository mirroring, start the repository mirroring worker as described later in this procedure. Registry Storage : Identify the location of storage. A variety of cloud and local storage options are available. Remote storage is required for high availability. Identify the Ceph storage location if you are following the example for Red Hat Quay high availability storage. On OpenShift, the example uses Amazon S3 storage. Action Log Storage Configuration : Action logs are stored in the Red Hat Quay database by default. If you have a large amount of action logs, you can have those logs directed to Elasticsearch for later search and analysis. To do this, change the value of Action Logs Storage to Elasticsearch and configure related settings as described in Configure action log storage . Action Log Rotation and Archiving : Select to enable log rotation, which moves logs older than 30 days into storage, then indicate storage area. Security Scanner : Enable security scanning by selecting a security scanner endpoint and authentication key. To setup Clair to do image scanning, refer to Clair Setup and Configuring Clair . Recommended for high availability. Application Registry : Enable an additional application registry that includes things like Kubernetes manifests or Helm charts (see the App Registry specification ). rkt Conversion : Allow rkt fetch to be used to fetch images from Red Hat Quay registry. Public and private GPG2 keys are needed. This field is deprecated. E-mail : Enable e-mail to use for notifications and user password resets. Internal Authentication : Change default authentication for the registry from Local Database to LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token. External Authorization (OAuth) : Enable to allow GitHub or GitHub Enterprise to authenticate to the registry. Google Authentication : Enable to allow Google to authenticate to the registry. Access Settings : Basic username/password authentication is enabled by default. Other authentication types that can be enabled include: external application tokens (user-generated tokens used with docker or rkt commands), anonymous access (enable for public access to anyone who can get to the registry), user creation (let users create their own accounts), encrypted client password (require command-line user access to include encrypted passwords), and prefix username autocompletion (disable to require exact username matches on autocompletion). 
Registry Protocol Settings : Leave the Restrict V1 Push Support checkbox enabled to restrict access to Docker V1 protocol pushes. Although Red Hat recommends against enabling Docker V1 push protocol, if you do allow it, you must explicitly whitelist the namespaces for which it is enabled. Dockerfile Build Support : Enable to allow users to submit Dockerfiles to be built and pushed to Red Hat Quay. This is not recommended for multitenant environments. Validate the changes : Select Validate Configuration Changes . If validation is successful, you will be presented with the following Download Configuration modal: Download configuration : Select the Download Configuration button and save the tarball ( quay-config.tar.gz ) to a local directory to use later to start Red Hat Quay. At this point, you can shut down the Red Hat Quay configuration tool and close your browser. Next, copy the tarball file to the system on which you want to install your first Red Hat Quay node. For a basic install, you might just be running Red Hat Quay on the same system.
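As a sketch of what starting the node with the downloaded configuration can look like, you might unpack the tarball into a configuration directory and mount it into the Quay container. The /mnt/quay/config directory and the port mappings below are illustrative assumptions; the image tag matches the one used for the configuration tool earlier, and /conf/stack is the path from which Quay reads its configuration.

mkdir -p /mnt/quay/config
tar xzf quay-config.tar.gz -C /mnt/quay/config
sudo podman run -d --name quay -p 80:8080 -p 443:8443 -v /mnt/quay/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.13.3

For high availability deployments, additional options and volumes (for example, a storage volume mounted at /datastorage) are required, as described in the deployment chapters.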
[ "sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.13.3 config my-secret-password" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/configuring_red_hat_quay
probe::udp.recvmsg
probe::udp.recvmsg Name probe::udp.recvmsg - Fires whenever a UDP message is received Synopsis udp.recvmsg Values size Number of bytes received by the process sock Network socket used by the process daddr A string representing the destination IP address family IP address family name The name of this probe dport UDP destination port saddr A string representing the source IP address sport UDP source port Context The process which received a UDP message
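A minimal SystemTap script that uses this probe and the values listed above might look like the following sketch. Running it requires root privileges and the kernel debugging information that SystemTap normally needs.

stap -e 'probe udp.recvmsg {
  printf("%s: %s:%d -> %s:%d, %d bytes\n", name, saddr, sport, daddr, dport, size)
}'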
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-udp-recvmsg
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview
Chapter 1. Migration from OpenShift Container Platform 3 to 4 overview OpenShift Container Platform 4 clusters are different from OpenShift Container Platform 3 clusters. OpenShift Container Platform 4 clusters contain new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. To learn more about migrating from OpenShift Container Platform 3 to 4 see About migrating from OpenShift Container Platform 3 to 4 . 1.1. Differences between OpenShift Container Platform 3 and 4 Before migrating from OpenShift Container Platform 3 to 4, you can check differences between OpenShift Container Platform 3 and 4 . Review the following information: Architecture Installation and update Storage , network , logging , security , and monitoring considerations 1.2. Planning network considerations Before migrating from OpenShift Container Platform 3 to 4, review the differences between OpenShift Container Platform 3 and 4 for information about the following areas: DNS considerations Isolating the DNS domain of the target cluster from the clients . Setting up the target cluster to accept the source DNS domain . You can migrate stateful application workloads from OpenShift Container Platform 3 to 4 at the granularity of a namespace. To learn more about MTC see Understanding MTC . Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . 1.3. Installing MTC Review the following tasks to install the MTC: Install the Migration Toolkit for Containers Operator on target cluster by using Operator Lifecycle Manager (OLM) . Install the legacy Migration Toolkit for Containers Operator on the source cluster manually . Configure object storage to use as a replication repository . 1.4. Upgrading MTC You upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.7 by using OLM. You upgrade MTC on OpenShift Container Platform 3 by reinstalling the legacy Migration Toolkit for Containers Operator. 1.5. Reviewing premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the premigration checklists . 1.6. Migrating applications You can migrate your applications by using the MTC web console or the command line . 1.7. Advanced migration options You can automate your migrations and modify MTC custom resources to improve the performance of large-scale migrations by using the following options: Running a state migration Creating migration hooks Editing, excluding, and mapping migrated resources Configuring the migration controller for large migrations 1.8. Troubleshooting migrations You can perform the following troubleshooting tasks: Viewing migration plan resources by using the MTC web console Viewing the migration plan aggregated log file Using the migration log reader Accessing performance metrics Using the must-gather tool Using the Velero CLI to debug Backup and Restore CRs Using MTC custom resources for troubleshooting Checking common issues and concerns 1.9. Rolling back a migration You can roll back a migration by using the MTC web console, by using the CLI, or manually. 1.10. Uninstalling MTC and deleting resources You can uninstall the MTC and delete its resources to clean up the cluster.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/migration-from-version-3-to-4-overview
Chapter 2. The Cargo build tool
Chapter 2. The Cargo build tool Cargo is a build tool and front end for the Rust compiler rustc as well as a package and dependency manager. It allows Rust projects to declare dependencies with specific version requirements, resolves the full dependency graph, downloads packages, and builds as well as tests your entire project. Rust Toolset is distributed with Cargo 1.79.0. 2.1. The Cargo directory structure and file placements The Cargo build tool uses set conventions for defining the directory structure and file placement within a Cargo package. Running the cargo new command generates the package directory structure and templates for both a manifest and a project file. By default, it also initializes a new Git repository in the package root directory. For a binary program, Cargo creates a directory project_name containing a text file named Cargo.toml and a subdirectory src containing a text file named main.rs . Additional resources For more information on the Cargo directory structure, see The Cargo Book - Package Layout . For in-depth information about Rust code organization, see The Rust Programming Language - Managing Growing Projects with Packages, Crates, and Modules . 2.2. Creating a Rust project Create a new Rust project that is set up according to the Cargo conventions. For more information on Cargo conventions, see Cargo directory structure and file placements . Procedure Create a Rust project by running the following command: On Red Hat Enterprise Linux 8: Replace < project_name > with your project name. On Red Hat Enterprise Linux 9: Replace < project_name > with your project name. Note To edit the project code, edit the main executable file main.rs and add new source files to the src subdirectory. Additional resources For information on configuring your project and adding dependencies, see Configuring Rust project dependencies . 2.3. Creating a Rust library project Complete the following steps to create a Rust library project using the Cargo build tool. Procedure To create a Rust library project, run the following command: On Red Hat Enterprise Linux 8: Replace < project_name > with the name of your Rust project. On Red Hat Enterprise Linux 9: Replace < project_name > with the name of your Rust project. Note To edit the project code, edit the source file, lib.rs , in the src subdirectory. Additional resources Managing Growing Projects with Packages, Crates, and Modules 2.4. Building a Rust project Build your Rust project using the Cargo build tool. Cargo resolves all dependencies of your project, downloads missing dependencies, and compiles it using the rustc compiler. By default, projects are built and compiled in debug mode. For information on compiling your project in release mode, see Building a Rust project in release mode . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To build a Rust project managed by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To verify that your Rust program can be built when you do not need to build an executable file, run: 2.5. Building a Rust project in release mode Build your Rust project in release mode using the Cargo build tool. Release mode is optimizing your source code and can therefore increase compilation time while ensuring that the compiled binary will run faster. Use this mode to produce optimized artifacts suitable for release and production. 
Cargo resolves all dependencies of your project, downloads missing dependencies, and compiles it using the rustc compiler. For information on compiling your project in debug mode, see Building a Rust project . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To build the project in release mode, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To verify that your Rust program can be build when you do not need to build an executable file, run: 2.6. Running a Rust program Run your Rust project using the Cargo build tool. Cargo first rebuilds your project and then runs the resulting executable file. If used during development, the cargo run command correctly resolves the output path independently of the build mode. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure To run a Rust program managed as a project by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Note If your program has not been built yet, Cargo builds your program before running it. 2.7. Testing a Rust project Test your Rust program using the Cargo build tool. Cargo first rebuilds your project and then runs the tests found in the project. Note that you can only test functions that are free, monomorphic, and take no arguments. The function return type must be either () or Result<(), E> where E: Error . By default, Rust projects are tested in debug mode. For information on testing your project in release mode, see Testing a Rust project in release mode . Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure Add the test attribute #[test] in front of your function. To run tests for a Rust project managed by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on performing tests in your Rust project, see The Rust Reference - Testing attributes . 2.8. Testing a Rust project in release mode Test your Rust program in release mode using the Cargo build tool. Release mode is optimizing your source code and can therefore increase compilation time while ensuring that the compiled binary will run faster. Use this mode to produce optimized artifacts suitable for release and production. Cargo first rebuilds your project and then runs the tests found in the project. Note that you can only test functions that are free, monomorphic, and take no arguments. The function return type must be either () or Result<(), E> where E: Error . For information on testing your project in debug mode, see Testing a Rust project . Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure Add the test attribute #[test] in front of your function. To run tests for a Rust project managed by Cargo in release mode, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on performing tests in your Rust project, see The Rust Reference - Testing attributes . 2.9. Configuring Rust project dependencies Configure the dependencies of your Rust project using the Cargo build tool. To specify dependencies for a project managed by Cargo, edit the file Cargo.toml in the project directory and rebuild your project. 
Cargo downloads the Rust code packages and their dependencies, stores them locally, builds all of the project source code including the dependency code packages, and runs the resulting executable. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure In your project directory, open the file Cargo.toml . Move to the section labelled [dependencies] . Each dependency is listed on a new line in the following format: Rust code packages are called crates. Edit your dependencies. Rebuild your project by running: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Run your project by using the following command: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on configuring Rust dependencies, see The Cargo Book - Specifying Dependencies . 2.10. Building documentation for a Rust project Use the Cargo tool to generate documentation from comments in your source code that are marked for extraction. Note that documentation comments are extracted only for public functions, variables, and members. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Configured dependencies. For more information on configuring dependencies, see Configuring Rust project dependencies . Procedure To mark comments for extraction, use three slashes /// and place your comment in the beginning of the line it is documenting. Cargo supports the Markdown language for your comments. To build project documentation using Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: The generated documentation is located in the .target/doc directory. Additional resources For more information on building documentation using Cargo, see The Rust Programming Language - Making Useful Documentation Comments . 2.11. Compiling code into a WebAssembly binary with Rust on Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 Complete the following steps to install the WebAssembly standard library. Prerequisites Rust Toolset is installed. For more information, see Installing Rust Toolset . Procedure To install the WebAssembly standard library, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To use WebAssembly with Cargo, run: On Red Hat Enterprise Linux 8: Replace < command > with the Cargo command you want to run. On Red Hat Enterprise Linux 9: Replace < command > with the Cargo command you want to run. Additional resources For more information on WebAssembly, see the official Rust and WebAssembly documentation or the Rust and WebAssembly book. 2.12. Vendoring Rust project dependencies Create a local copy of the dependencies of your Rust project for offline redistribution and reuse using the Cargo build tool. This procedure is called vendoring project dependencies. The vendored dependencies including Rust code packages for building your project on a Windows operating system are located in the vendor directory. Vendored dependencies can be used by Cargo without any connection to the internet. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Configured dependencies. For more information on configuring dependencies, see Configuring Rust project dependencies . Procedure To vendor your Rust project with dependencies using Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 2.13. 
Additional resources For more information on Cargo, see the Official Cargo Guide . To display the manual page included in Rust Toolset, run: For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 9:
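The following src/lib.rs sketch ties several of the preceding sections together: a documentation comment that cargo doc extracts, and a test function that cargo test runs. The function name and values are arbitrary illustrations rather than part of Rust Toolset.

/// Adds two integers.
///
/// Documentation comments such as this one are extracted by `cargo doc`
/// because the function is public.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[test]
fn adds_two_integers() {
    // Run with `cargo test`; this function is free, monomorphic, and takes no arguments.
    assert_eq!(add(2, 3), 5);
}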
[ "cargo new --bin < project_name >", "cargo new --bin < project_name >", "cargo new --lib < project_name >", "cargo new --lib < project_name >", "cargo build", "cargo build", "cargo check", "cargo build --release", "cargo build --release", "cargo check", "cargo run", "cargo run", "cargo test", "cargo test", "cargo test --release", "cargo test --release", "crate_name = version", "cargo build", "cargo build", "cargo run", "cargo run", "cargo doc --no-deps", "cargo doc --no-deps", "yum install rust-std-static-wasm32-unknown-unknown", "dnf install rust-std-static-wasm32-unknown-unknown", "cargo < command > --target wasm32-unknown-unknown", "cargo < command > --target wasm32-unknown-unknown", "cargo vendor", "cargo vendor", "man cargo", "man cargo" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.79.0_toolset/assembly_the-cargo-build-tool
Chapter 14. Security
Chapter 14. Security OpenSSH chroot Shell Logins Generally, each Linux user is mapped to an SELinux user using SELinux policy, enabling Linux users to inherit the restrictions placed on SELinux users. There is a default mapping in which Linux users are mapped to the SELinux unconfined_u user. In Red Hat Enterprise Linux 7, the ChrootDirectory option for chrooting users can be used with unconfined users without any change, but for confined users, such as staff_u, user_u, or guest_u, the SELinux selinuxuser_use_ssh_chroot variable has to be set. Administrators are advised to use the guest_u user for all chrooted users when using the ChrootDirectory option to achieve higher security. OpenSSH - Multiple Required Authentications Red Hat Enterprise Linux 7 supports multiple required authentications in SSH protocol version 2 using the AuthenticationMethods option. This option lists one or more comma-separated lists of authentication method names. Successful completion of all the methods in any list is required for authentication to complete. This enables, for example, requiring a user to have to authenticate using the public key or GSSAPI before they are offered password authentication. GSS Proxy GSS Proxy is the system service that establishes GSS API Kerberos context on behalf of other applications. This brings security benefits; for example, in a situation when the access to the system keytab is shared between different processes, a successful attack against that process leads to Kerberos impersonation of all other processes. Changes in NSS The nss packages have been upgraded to upstream version 3.15.2. Message-Digest algorithm 2 (MD2), MD4, and MD5 signatures are no longer accepted for online certificate status protocol (OCSP) or certificate revocation lists (CRLs), consistent with their handling for general certificate signatures. Advanced Encryption Standard Galois Counter Mode (AES-GCM) Cipher Suite (RFC 5288 and RFC 5289) has been added for use when TLS 1.2 is negotiated. Specifically, the following cipher suites are now supported: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256; TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256; TLS_DHE_RSA_WITH_AES_128_GCM_SHA256; TLS_RSA_WITH_AES_128_GCM_SHA256. New Boolean Names Several SELinux boolean names have been changed to be more domain-specific. The old names can still be used, however, only the new names will appear in the lists of booleans. The old boolean names and their respective new names are available from the /etc/selinux/<policy_type>/booleans.subs_dist file. SCAP Workbench SCAP Workbench is a GUI front end that provides scanning functionality for SCAP content. SCAP Workbench is included as a Technology Preview in Red Hat Enterprise Linux 7. You can find detailed information on the website of the upstream project: https://fedorahosted.org/scap-workbench/ OSCAP Anaconda Add-On Red Hat Enterprise Linux 7 introduces the OSCAP Anaconda add-on as a Technology Preview. The add-on integrates OpenSCAP utilities with the installation process and enables installation of a system following restrictions given by SCAP content.
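As an illustrative sketch of the two OpenSSH items above (the user name confineduser is a placeholder, and your policy may call for different authentication method lists):

# Map a chrooted Linux user to the SELinux guest_u user and allow chrooted SSH logins
semanage login -a -s guest_u confineduser
setsebool -P selinuxuser_use_ssh_chroot 1

# In /etc/ssh/sshd_config: require public key authentication to succeed
# before password authentication is offered
AuthenticationMethods publickey,password

# Reload the SSH daemon to apply the configuration change
systemctl reload sshd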
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-security
Chapter 14. Web Servers
Chapter 14. Web Servers A web server is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well. Web servers are also known as HTTP servers, as they use the hypertext transport protocol ( HTTP ). The web servers available in Red Hat Enterprise Linux 7 are: Apache HTTP Server nginx Important Note that the nginx web server is available only as a Software Collection for Red Hat Enterprise Linux 7. See the Red Hat Software Collections Release Notes for information regarding getting access to nginx, usage of Software Collections, and other. 14.1. The Apache HTTP Server This section focuses on the Apache HTTP Server 2.4 , httpd , an open source web server developed by the Apache Software Foundation . If you are upgrading from a release of Red Hat Enterprise Linux, you will need to update the httpd service configuration accordingly. This section reviews some of the newly added features, outlines important changes between Apache HTTP Server 2.4 and version 2.2, and guides you through the update of older configuration files. 14.1.1. Notable Changes The Apache HTTP Server in Red Hat Enterprise Linux 7 has the following changes compared to Red Hat Enterprise Linux 6: httpd Service Control With the migration away from SysV init scripts, server administrators should switch to using the apachectl and systemctl commands to control the service, in place of the service command. The following examples are specific to the httpd service. The command: is replaced by The systemd unit file for httpd has different behavior from the init script as follows: A graceful restart is used by default when the service is reloaded. A graceful stop is used by default when the service is stopped. The command: is replaced by Private /tmp To enhance system security, the systemd unit file runs the httpd daemon using a private /tmp directory, separate to the system /tmp directory. Configuration Layout Configuration files which load modules are now placed in the /etc/httpd/conf.modules.d/ directory. Packages that provide additional loadable modules for httpd , such as php , will place a file in this directory. An Include directive before the main section of the /etc/httpd/conf/httpd.conf file is used to include files within the /etc/httpd/conf.modules.d/ directory. This means any configuration files within conf.modules.d/ are processed before the main body of httpd.conf . An IncludeOptional directive for files within the /etc/httpd/conf.d/ directory is placed at the end of the httpd.conf file. This means the files within /etc/httpd/conf.d/ are now processed after the main body of httpd.conf . Some additional configuration files are provided by the httpd package itself: /etc/httpd/conf.d/autoindex.conf - This configures mod_autoindex directory indexing. /etc/httpd/conf.d/userdir.conf - This configures access to user directories, for example http://example.com/~username/ ; such access is disabled by default for security reasons. /etc/httpd/conf.d/welcome.conf - As in releases, this configures the welcome page displayed for http://localhost/ when no content is present. Default Configuration A minimal httpd.conf file is now provided by default. Many common configuration settings, such as Timeout or KeepAlive are no longer explicitly configured in the default configuration; hard-coded settings will be used instead, by default. The hard-coded default settings for all configuration directives are specified in the manual. 
See the section called "Installable Documentation" for more information. Incompatible Syntax Changes If migrating an existing configuration from httpd 2.2 to httpd 2.4 , a number of backwards-incompatible changes to the httpd configuration syntax were made which will require changes. See the following Apache document for more information on upgrading http://httpd.apache.org/docs/2.4/upgrading.html Processing Model In releases of Red Hat Enterprise Linux, different multi-processing models ( MPM ) were made available as different httpd binaries: the forked model, "prefork", as /usr/sbin/httpd , and the thread-based model "worker" as /usr/sbin/httpd.worker . In Red Hat Enterprise Linux 7, only a single httpd binary is used, and three MPMs are available as loadable modules: worker, prefork (default), and event. Edit the configuration file /etc/httpd/conf.modules.d/00-mpm.conf as required, by adding and removing the comment character # so that only one of the three MPM modules is loaded. Packaging Changes The LDAP authentication and authorization modules are now provided in a separate sub-package, mod_ldap . The new module mod_session and associated helper modules are provided in a new sub-package, mod_session . The new modules mod_proxy_html and mod_xml2enc are provided in a new sub-package, mod_proxy_html . These packages are all in the Optional channel. Note Before subscribing to the Optional and Supplementary channels see the Scope of Coverage Details . If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. Packaging Filesystem Layout The /var/cache/mod_proxy/ directory is no longer provided; instead, the /var/cache/httpd/ directory is packaged with a proxy and ssl subdirectory. Packaged content provided with httpd has been moved from /var/www/ to /usr/share/httpd/ : /usr/share/httpd/icons/ - The directory containing a set of icons used with directory indices, previously contained in /var/www/icons/ , has moved to /usr/share/httpd/icons/ . Available at http://localhost/icons/ in the default configuration; the location and the availability of the icons is configurable in the /etc/httpd/conf.d/autoindex.conf file. /usr/share/httpd/manual/ - The /var/www/manual/ has moved to /usr/share/httpd/manual/ . This directory, contained in the httpd-manual package, contains the HTML version of the manual for httpd . Available at http://localhost/manual/ if the package is installed, the location and the availability of the manual is configurable in the /etc/httpd/conf.d/manual.conf file. /usr/share/httpd/error/ - The /var/www/error/ has moved to /usr/share/httpd/error/ . Custom multi-language HTTP error pages. Not configured by default, the example configuration file is provided at /usr/share/doc/httpd- VERSION /httpd-multilang-errordoc.conf . Authentication, Authorization and Access Control The configuration directives used to control authentication, authorization and access control have changed significantly. Existing configuration files using the Order , Deny and Allow directives should be adapted to use the new Require syntax. 
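As a brief sketch of that conversion (the directory path is an arbitrary example; the Apache documentation referenced below covers further cases):

# httpd 2.2 configuration
<Directory "/var/www/example">
    Order allow,deny
    Allow from all
</Directory>

# httpd 2.4 equivalent using the Require directive
<Directory "/var/www/example">
    Require all granted
</Directory>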
See the following Apache document for more information http://httpd.apache.org/docs/2.4/howto/auth.html suexec To improve system security, the suexec binary is no longer installed as if by the root user; instead, it has file system capability bits set which allow a more restrictive set of permissions. In conjunction with this change, the suexec binary no longer uses the /var/log/httpd/suexec.log logfile. Instead, log messages are sent to syslog ; by default these will appear in the /var/log/secure log file. Module Interface Third-party binary modules built against httpd 2.2 are not compatible with httpd 2.4 due to changes to the httpd module interface. Such modules will need to be adjusted as necessary for the httpd 2.4 module interface, and then rebuilt. A detailed list of the API changes in version 2.4 is available here: http://httpd.apache.org/docs/2.4/developer/new_api_2_4.html . The apxs binary used to build modules from source has moved from /usr/sbin/apxs to /usr/bin/apxs . Removed modules List of httpd modules removed in Red Hat Enterprise Linux 7: mod_auth_mysql, mod_auth_pgsql httpd 2.4 provides SQL database authentication support internally in the mod_authn_dbd module. mod_perl mod_perl is not officially supported with httpd 2.4 by upstream. mod_authz_ldap httpd 2.4 provides LDAP support in sub-package mod_ldap using mod_authnz_ldap . 14.1.2. Updating the Configuration To update the configuration files from the Apache HTTP Server version 2.2, take the following steps: Make sure all module names are correct, since they may have changed. Adjust the LoadModule directive for each module that has been renamed. Recompile all third party modules before attempting to load them. This typically means authentication and authorization modules. If you use the mod_userdir module, make sure the UserDir directive indicating a directory name (typically public_html ) is provided. If you use the Apache HTTP Secure Server, see Section 14.1.8, "Enabling the mod_ssl Module" for important information on enabling the Secure Sockets Layer (SSL) protocol. Note that you can check the configuration for possible errors by using the following command: For more information on upgrading the Apache HTTP Server configuration from version 2.2 to 2.4, see http://httpd.apache.org/docs/2.4/upgrading.html . 14.1.3. Running the httpd Service This section describes how to start, stop, restart, and check the current status of the Apache HTTP Server. To be able to use the httpd service, make sure you have the httpd installed. You can do so by using the following command: For more information on the concept of targets and how to manage system services in Red Hat Enterprise Linux in general, see Chapter 10, Managing Services with systemd . 14.1.3.1. Starting the Service To run the httpd service, type the following at a shell prompt as root : If you want the service to start automatically at boot time, use the following command: Note If running the Apache HTTP Server as a secure server, a password may be required after the machine boots if using an encrypted private SSL key. 14.1.3.2. Stopping the Service To stop the running httpd service, type the following at a shell prompt as root : To prevent the service from starting automatically at boot time, type: 14.1.3.3. Restarting the Service There are three different ways to restart a running httpd service: To restart the service completely, enter the following command as root : This stops the running httpd service and immediately starts it again. 
Use this command after installing or removing a dynamically loaded module such as PHP. To only reload the configuration, as root , type: This causes the running httpd service to reload its configuration file. Any requests currently being processed will be interrupted, which may cause a client browser to display an error message or render a partial page. To reload the configuration without affecting active requests, enter the following command as root : This causes the running httpd service to reload its configuration file. Any requests currently being processed will continue to use the old configuration. For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 14.1.3.4. Verifying the Service Status To verify that the httpd service is running, type the following at a shell prompt: 14.1.4. Editing the Configuration Files When the httpd service is started, by default, it reads the configuration from locations that are listed in Table 14.1, "The httpd service configuration files" . Table 14.1. The httpd service configuration files Path Description /etc/httpd/conf/httpd.conf The main configuration file. /etc/httpd/conf.d/ An auxiliary directory for configuration files that are included in the main configuration file. Although the default configuration should be suitable for most situations, it is a good idea to become at least familiar with some of the more important configuration options. Note that for any changes to take effect, the web server has to be restarted first. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. To check the configuration for possible errors, type the following at a shell prompt: To make the recovery from mistakes easier, it is recommended that you make a copy of the original file before editing it. 14.1.5. Working with Modules Being a modular application, the httpd service is distributed along with a number of Dynamic Shared Objects ( DSO s), which can be dynamically loaded or unloaded at runtime as necessary. On Red Hat Enterprise Linux 7, these modules are located in /usr/lib64/httpd/modules/ . 14.1.5.1. Loading a Module To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.d/ directory. Example 14.1. Loading the mod_ssl DSO Once you are finished, restart the web server to reload the configuration. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. 14.1.5.2. Writing a Module If you intend to create a new DSO module, make sure you have the httpd-devel package installed. To do so, enter the following command as root : This package contains the include files, the header files, and the APache eXtenSion ( apxs ) utility required to compile a module. Once written, you can build the module with the following command: If the build was successful, you should be able to load the module the same way as any other module that is distributed with the Apache HTTP Server. 14.1.6. Setting Up Virtual Hosts The Apache HTTP Server's built in virtual hosting allows the server to provide different information based on which IP address, host name, or port is being requested. 
To create a name-based virtual host, copy the example configuration file /usr/share/doc/httpd- VERSION /httpd-vhosts.conf into the /etc/httpd/conf.d/ directory, and replace the @@Port@@ and @@ServerRoot@@ placeholder values. Customize the options according to your requirements as shown in Example 14.2, "Example virtual host configuration" . Example 14.2. Example virtual host configuration Note that ServerName must be a valid DNS name assigned to the machine. The <VirtualHost> container is highly customizable, and accepts most of the directives available within the main server configuration. Directives that are not supported within this container include User and Group , which were replaced by SuexecUserGroup . Note If you configure a virtual host to listen on a non-default port, make sure you update the Listen directive in the global settings section of the /etc/httpd/conf/httpd.conf file accordingly. To activate a newly created virtual host, the web server has to be restarted first. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. 14.1.7. Setting Up an SSL Server Secure Sockets Layer ( SSL ) is a cryptographic protocol that allows a server and a client to communicate securely. Along with its extended and improved version called Transport Layer Security ( TLS ), it ensures both privacy and data integrity. The Apache HTTP Server in combination with mod_ssl , a module that uses the OpenSSL toolkit to provide the SSL/TLS support, is commonly referred to as the SSL server . Red Hat Enterprise Linux also supports the use of Mozilla NSS as the TLS implementation. Support for Mozilla NSS is provided by the mod_nss module. Unlike an HTTP connection that can be read and possibly modified by anybody who is able to intercept it, the use of SSL/TLS over HTTP, referred to as HTTPS, prevents any inspection or modification of the transmitted content. This section provides basic information on how to enable this module in the Apache HTTP Server configuration, and guides you through the process of generating private keys and self-signed certificates. 14.1.7.1. An Overview of Certificates and Security Secure communication is based on the use of keys. In conventional or symmetric cryptography , both ends of the transaction have the same key they can use to decode each other's transmissions. On the other hand, in public or asymmetric cryptography , two keys co-exist: a private key that is kept a secret, and a public key that is usually shared with the public. While the data encoded with the public key can only be decoded with the private key, data encoded with the private key can in turn only be decoded with the public key. To provide secure communications using SSL, an SSL server must use a digital certificate signed by a Certificate Authority ( CA ). The certificate lists various attributes of the server (that is, the server host name, the name of the company, its location, etc.), and the signature produced using the CA's private key. This signature ensures that a particular certificate authority has signed the certificate, and that the certificate has not been modified in any way. When a web browser establishes a new SSL connection, it checks the certificate provided by the web server. 
If the certificate does not have a signature from a trusted CA, or if the host name listed in the certificate does not match the host name used to establish the connection, it refuses to communicate with the server and usually presents a user with an appropriate error message. By default, most web browsers are configured to trust a set of widely used certificate authorities. Because of this, an appropriate CA should be chosen when setting up a secure server, so that target users can trust the connection, otherwise they will be presented with an error message, and will have to accept the certificate manually. Since encouraging users to override certificate errors can allow an attacker to intercept the connection, you should use a trusted CA whenever possible. For more information on this, see Table 14.2, "Information about CA lists used by common web browsers" . Table 14.2. Information about CA lists used by common web browsers Web Browser Link Mozilla Firefox Mozilla root CA list . Opera Information on root certificates used by Opera . Internet Explorer Information on root certificates used by Microsoft Windows . Chromium Information on root certificates used by the Chromium project . When setting up an SSL server, you need to generate a certificate request and a private key, and then send the certificate request, proof of the company's identity, and payment to a certificate authority. Once the CA verifies the certificate request and your identity, it will send you a signed certificate you can use with your server. Alternatively, you can create a self-signed certificate that does not contain a CA signature, and thus should be used for testing purposes only. 14.1.8. Enabling the mod_ssl Module If you intend to set up an SSL or HTTPS server using mod_ssl , you cannot have the another application or module, such as mod_nss configured to use the same port. Port 443 is the default port for HTTPS. To set up an SSL server using the mod_ssl module and the OpenSSL toolkit, install the mod_ssl and openssl packages. Enter the following command as root : This will create the mod_ssl configuration file at /etc/httpd/conf.d/ssl.conf , which is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly recommended against. 14.1.8.1. Enabling and Disabling SSL and TLS in mod_ssl To disable and enable specific versions of the SSL and TLS protocol, either do it globally by adding the SSLProtocol directive in the " # SSL Global Context" section of the configuration file and removing it everywhere else, or edit the default entry under " SSL Protocol support" in all "VirtualHost" sections. If you do not specify it in the per-domain VirtualHost section then it will inherit the settings from the global section. To make sure that a protocol version is being disabled the administrator should either only specify SSLProtocol in the "SSL Global Context" section, or specify it in all per-domain VirtualHost sections. 
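For example, the SSLProtocol entries used in the procedures that follow look like this (the first line is the default shipped entry, the other two are the hardened variants):

SSLProtocol all -SSLv2
SSLProtocol all -SSLv2 -SSLv3
SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2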
Disable SSLv2 and SSLv3 To disable SSL version 2 and SSL version 3, which implies enabling everything except SSL version 2 and SSL version 3, in all VirtualHost sections, proceed as follows: As root , open the /etc/httpd/conf.d/ssl.conf file and search for all instances of the SSLProtocol directive. By default, the configuration file contains one section that looks as follows: This section is within the VirtualHost section. Edit the SSLProtocol line as follows: Repeat this action for all VirtualHost sections. Save and close the file. Verify that all occurrences of the SSLProtocol directive have been changed as follows: This step is particularly important if you have more than the one default VirtualHost section. Restart the Apache daemon as follows: Note that any sessions will be interrupted. Disable All SSL and TLS Protocols Except TLS 1 and Up To disable all SSL and TLS protocol versions except TLS version 1 and higher, proceed as follows: As root , open the /etc/httpd/conf.d/ssl.conf file and search for all instances of SSLProtocol directive. By default the file contains one section that looks as follows: Edit the SSLProtocol line as follows: Save and close the file. Verify the change as follows: Restart the Apache daemon as follows: Note that any sessions will be interrupted. Testing the Status of SSL and TLS Protocols To check which versions of SSL and TLS are enabled or disabled, make use of the openssl s_client -connect command. The command has the following form: Where port is the port to test and protocol is the protocol version to test for. To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443 to see if SSLv3 is enabled, issue a command as follows: The above output indicates that the handshake failed and therefore no cipher was negotiated. The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated. The openssl s_client command options are documented in the s_client(1) manual page. For more information on the SSLv3 vulnerability and how to test for it, see the Red Hat Knowledgebase article POODLE: SSLv3 vulnerability (CVE-2014-3566) . 14.1.9. Enabling the mod_nss Module If you intend to set up an HTTPS server using mod_nss , you cannot have the mod_ssl package installed with its default settings as mod_ssl will use port 443 by default, however this is the default HTTPS port. If at all possible, remove the package. To remove mod_ssl , enter the following command as root : Note If mod_ssl is required for other purposes, modify the /etc/httpd/conf.d/ssl.conf file to use a port other than 443 to prevent mod_ssl conflicting with mod_nss when its port to listen on is changed to 443 . Only one module can own a port, therefore mod_nss and mod_ssl can only co-exist at the same time if they use unique ports. For this reason mod_nss by default uses 8443 , but the default port for HTTPS is port 443 . The port is specified by the Listen directive as well as in the VirtualHost name or address. Everything in NSS is associated with a "token". The software token exists in the NSS database but you can also have a physical token containing certificates. With OpenSSL, discrete certificates and private keys are held in PEM files. With NSS, these are stored in a database. Each certificate and key is associated with a token and each token can have a password protecting it. 
This password is optional, but if a password is used then the Apache HTTP server needs a copy of it in order to open the database without user intervention at system start. Configuring mod_nss Install mod_nss as root : This will create the mod_nss configuration file at /etc/httpd/conf.d/nss.conf . The /etc/httpd/conf.d/ directory is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . As root , open the /etc/httpd/conf.d/nss.conf file and search for all instances of the Listen directive. Edit the Listen 8443 line as follows: Port 443 is the default port for HTTPS . Edit the default VirtualHost default :8443 line as follows: Edit any other non-default virtual host sections if they exist. Save and close the file. Mozilla NSS stores certificates in a server certificate database indicated by the NSSCertificateDatabase directive in the /etc/httpd/conf.d/nss.conf file. By default the path is set to /etc/httpd/alias , the NSS database created during installation. To view the default NSS database, issue a command as follows: In the above command output, Server-Cert is the default NSSNickname . The -L option lists all the certificates, or displays information about a named certificate, in a certificate database. The -d option specifies the database directory containing the certificate and key database files. See the certutil(1) man page for more command line options. To configure mod_nss to use another database, edit the NSSCertificateDatabase line in the /etc/httpd/conf.d/nss.conf file. The default file has the following lines within the VirtualHost section. In the above command output, alias is the default NSS database directory, /etc/httpd/alias/ . To apply a password to the default NSS certificate database, use the following command as root : Before deploying the HTTPS server, create a new certificate database using a certificate signed by a certificate authority (CA). Example 14.3. Adding a Certificate to the Mozilla NSS database The certutil command is used to add a CA certificate to the NSS database files: The above command adds a CA certificate stored in a PEM-formatted file named certificate.pem . The -d option specifies the NSS database directory containing the certificate and key database files, the -n option sets a name for the certificate, -t CT,, means that the certificate is trusted to be used in TLS clients and servers. The -A option adds an existing certificate to a certificate database. If the database does not exist it will be created. The -a option allows the use of ASCII format for input or output, and the -i option passes the certificate.pem input file to the command. See the certutil(1) man page for more command line options. The NSS database should be password protected to safeguard the private key. Example 14.4. Setting a Password for a Mozilla NSS database The certutil tool can be used set a password for an NSS database as follows: For example, for the default database, issue a command as root as follows: Configure mod_nss to use the NSS internal software token by changing the line with the NSSPassPhraseDialog directive as follows: This is to avoid manual password entry on system start. The software token exists in the NSS database but you can also have a physical token containing your certificates. 
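The directive referred to above is set in the /etc/httpd/conf.d/nss.conf file as follows:

NSSPassPhraseDialog file:/etc/httpd/password.conf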
If the SSL Server Certificate contained in the NSS database is an RSA certificate, make certain that the NSSNickname parameter is uncommented and matches the nickname displayed in step 4 above: If the SSL Server Certificate contained in the NSS database is an ECC certificate, make certain that the NSSECCNickname parameter is uncommented and matches the nickname displayed in step 4 above: Make certain that the NSSCertificateDatabase parameter is uncommented and points to the NSS database directory displayed in step 4 or configured in step 5 above: Replace /etc/httpd/alias with the path to the certificate database to be used. Create the /etc/httpd/password.conf file as root : Add a line with the following form: Replacing password with the password that was applied to the NSS security databases in step 6 above. Apply the appropriate ownership and permissions to the /etc/httpd/password.conf file: To configure mod_nss to use the NSS the software token in /etc/httpd/password.conf , edit /etc/httpd/conf.d/nss.conf as follows: Restart the Apache server for the changes to take effect as described in Section 14.1.3.3, "Restarting the Service" Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly recommended against. 14.1.9.1. Enabling and Disabling SSL and TLS in mod_nss To disable and enable specific versions of the SSL and TLS protocol, either do it globally by adding the NSSProtocol directive in the " # SSL Global Context" section of the configuration file and removing it everywhere else, or edit the default entry under " SSL Protocol" in all "VirtualHost" sections. If you do not specify it in the per-domain VirtualHost section then it will inherit the settings from the global section. To make sure that a protocol version is being disabled the administrator should either only specify NSSProtocol in the "SSL Global Context" section, or specify it in all per-domain VirtualHost sections. Disable All SSL and TLS Protocols Except TLS 1 and Up in mod_nss To disable all SSL and TLS protocol versions except TLS version 1 and higher, proceed as follows: As root , open the /etc/httpd/conf.d/nss.conf file and search for all instances of the NSSProtocol directive. By default, the configuration file contains one section that looks as follows: This section is within the VirtualHost section. Edit the NSSProtocol line as follows: Repeat this action for all VirtualHost sections. Edit the Listen 8443 line as follows: Edit the default VirtualHost default :8443 line as follows: Edit any other non-default virtual host sections if they exist. Save and close the file. Verify that all occurrences of the NSSProtocol directive have been changed as follows: This step is particularly important if you have more than one VirtualHost section. Restart the Apache daemon as follows: Note that any sessions will be interrupted. Testing the Status of SSL and TLS Protocols in mod_nss To check which versions of SSL and TLS are enabled or disabled in mod_nss , make use of the openssl s_client -connect command. Install the openssl package as root : The openssl s_client -connect command has the following form: Where port is the port to test and protocol is the protocol version to test for. 
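The form referred to above is:

openssl s_client -connect hostname:port -protocol

where -protocol is, for example, -ssl3 or -tls1 .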
To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443 to see if SSLv3 is enabled, issue a command as follows: The above output indicates that the handshake failed and therefore no cipher was negotiated. The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated. The openssl s_client command options are documented in the s_client(1) manual page. For more information on the SSLv3 vulnerability and how to test for it, see the Red Hat Knowledgebase article POODLE: SSLv3 vulnerability (CVE-2014-3566) . 14.1.10. Using an Existing Key and Certificate If you have a previously created key and certificate, you can configure the SSL server to use these files instead of generating new ones. There are only two situations where this is not possible: You are changing the IP address or domain name. Certificates are issued for a particular IP address and domain name pair. If one of these values changes, the certificate becomes invalid. You have a certificate from VeriSign, and you are changing the server software. VeriSign, a widely used certificate authority, issues certificates for a particular software product, IP address, and domain name. Changing the software product renders the certificate invalid. In either of the above cases, you will need to obtain a new certificate. For more information on this topic, see Section 14.1.11, "Generating a New Key and Certificate" . If you want to use an existing key and certificate, move the relevant files to the /etc/pki/tls/private/ and /etc/pki/tls/certs/ directories respectively. You can do so by issuing the following commands as root : Then add the following lines to the /etc/httpd/conf.d/ssl.conf configuration file: To load the updated configuration, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . Example 14.5. Using a key and certificate from the Red Hat Secure Web Server 14.1.11. Generating a New Key and Certificate In order to generate a new key and certificate pair, the crypto-utils package must be installed on the system. To install it, enter the following command as root : This package provides a set of tools to generate and manage SSL certificates and private keys, and includes genkey , the Red Hat Keypair Generation utility that will guide you through the key generation process. Important If the server already has a valid certificate and you are replacing it with a new one, specify a different serial number. This ensures that client browsers are notified of this change, update to this new certificate as expected, and do not fail to access the page. To create a new certificate with a custom serial number, as root , use the following command instead of genkey : Note If there already is a key file for a particular host name in your system, genkey will refuse to start. In this case, remove the existing file using the following command as root : To run the utility enter the genkey command as root , followed by the appropriate host name (for example, penguin.example.com ): To complete the key and certificate creation, take the following steps: Review the target locations in which the key and certificate will be stored. Figure 14.1. Running the genkey utility Use the Tab key to select the button, and press Enter to proceed to the screen. Using the up and down arrow keys, select a suitable key size. 
Note that while a larger key increases the security, it also increases the response time of your server. The NIST recommends using 2048 bits . See NIST Special Publication 800-131A . Figure 14.2. Selecting the key size Once finished, use the Tab key to select the button, and press Enter to initiate the random bits generation process. Depending on the selected key size, this may take some time. Decide whether you want to send a certificate request to a certificate authority. Figure 14.3. Generating a certificate request Use the Tab key to select Yes to compose a certificate request, or No to generate a self-signed certificate. Then press Enter to confirm your choice. Using the Spacebar key, enable ( [*] ) or disable ( [ ] ) the encryption of the private key. Figure 14.4. Encrypting the private key Use the Tab key to select the button, and press Enter to proceed to the screen. If you have enabled the private key encryption, enter an adequate passphrase. Note that for security reasons, it is not displayed as you type, and it must be at least five characters long. Figure 14.5. Entering a passphrase Use the Tab key to select the button, and press Enter to proceed to the screen. Important Entering the correct passphrase is required in order for the server to start. If you lose it, you will need to generate a new key and certificate. Customize the certificate details. Figure 14.6. Specifying certificate information Use the Tab key to select the button, and press Enter to finish the key generation. If you have previously enabled the certificate request generation, you will be prompted to send it to a certificate authority. Figure 14.7. Instructions on how to send a certificate request Press Enter to return to a shell prompt. Once generated, add the key and certificate locations to the /etc/httpd/conf.d/ssl.conf configuration file: Finally, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" , so that the updated configuration is loaded. 14.1.12. Configure the Firewall for HTTP and HTTPS Using the Command Line Red Hat Enterprise Linux does not allow HTTP and HTTPS traffic by default. To enable the system to act as a web server, make use of firewalld 's supported services to enable HTTP and HTTPS traffic to pass through the firewall as required. To enable HTTP using the command line, issue the following command as root : To enable HTTPS using the command line, issue the following command as root : Note that these changes will not persist after the system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option. 14.1.12.1. Checking Network Access for Incoming HTTPS and HTTPS Using the Command Line To check what services the firewall is configured to allow, using the command line, issue the following command as root : In this example taken from a default installation, the firewall is enabled but HTTP and HTTPS have not been allowed to pass through. Once the HTTP and HTTP firewall services are enabled, the services line will appear similar to the following: For more information on enabling firewall services, or opening and closing ports with firewalld , see the Red Hat Enterprise Linux 7 Security Guide . 14.1.13. Additional Resources To learn more about the Apache HTTP Server, see the following resources. Installed Documentation httpd(8) - The manual page for the httpd service containing the complete list of its command-line options. genkey(1) - The manual page for genkey utility, provided by the crypto-utils package. 
apachectl(8) - The manual page for the Apache HTTP Server Control Interface. Installable Documentation http://localhost/manual/ - The official documentation for the Apache HTTP Server with the full description of its directives and available modules. Note that in order to access this documentation, you must have the httpd-manual package installed, and the web server must be running. Before accessing the documentation, issue the following commands as root : Online Documentation http://httpd.apache.org/ - The official website for the Apache HTTP Server with documentation on all the directives and default modules. http://www.openssl.org/ - The OpenSSL home page containing further documentation, frequently asked questions, links to the mailing lists, and other useful resources.
[ "service httpd graceful", "apachectl graceful", "service httpd configtest", "apachectl configtest", "~]# apachectl configtest Syntax OK", "~]# yum install httpd", "~]# systemctl start httpd.service", "~]# systemctl enable httpd.service Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.", "~]# systemctl stop httpd.service", "~]# systemctl disable httpd.service Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service.", "~]# systemctl restart httpd.service", "~]# systemctl reload httpd.service", "~]# apachectl graceful", "~]# systemctl is-active httpd.service active", "~]# apachectl configtest Syntax OK", "LoadModule ssl_module modules/mod_ssl.so", "~]# yum install httpd-devel", "~]# apxs -i -a -c module_name.c", "<VirtualHost *:80> ServerAdmin [email protected] DocumentRoot \"/www/docs/penguin.example.com\" ServerName penguin.example.com ServerAlias www.penguin.example.com ErrorLog \"/var/log/httpd/dummy-host.example.com-error_log\" CustomLog \"/var/log/httpd/dummy-host.example.com-access_log\" common </VirtualHost>", "~]# yum install mod_ssl openssl", "~]# vi /etc/httpd/conf.d/ssl.conf SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2", "SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2 -SSLv3", "~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf SSLProtocol all -SSLv2 -SSLv3", "~]# systemctl restart httpd", "~]# vi /etc/httpd/conf.d/ssl.conf SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2", "SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2", "~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2", "~]# systemctl restart httpd", "openssl s_client -connect hostname : port - protocol", "~]# openssl s_client -connect localhost:443 -ssl3 CONNECTED(00000003) 139809943877536:error:14094410:SSL routines:SSL3_READ_BYTES: sslv3 alert handshake failure :s3_pkt.c:1257:SSL alert number 40 139809943877536:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES: ssl handshake failure :s3_pkt.c:596: output omitted New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 output truncated", "~]USD openssl s_client -connect localhost:443 -tls1_2 CONNECTED(00000003) depth=0 C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = localhost.localdomain, emailAddress = [email protected] output omitted New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1.2 output truncated", "~]# yum remove mod_ssl", "~]# yum install mod_nss", "Listen 443", "VirtualHost default :443", "~]# certutil -L -d /etc/httpd/alias Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI cacert CTu,Cu,Cu Server-Cert u,u,u alpha u,pu,u", "Server Certificate Database: The NSS security database directory that holds the certificates and keys. The database consists of 3 files: cert8.db, key3.db and secmod.db. Provide the directory that these files exist. 
NSSCertificateDatabase /etc/httpd/alias", "~]# certutil -W -d /etc/httpd/alias Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password: Password changed successfully.", "certutil -d /etc/httpd/nss-db-directory/ -A -n \" CA_certificate \" -t CT,, -a -i certificate.pem", "certutil -W -d /etc/httpd/ nss-db-directory /", "~]# certutil -W -d /etc/httpd/alias Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password: Password changed successfully.", "~]# vi /etc/httpd/conf.d/nss.conf NSSPassPhraseDialog file:/etc/httpd/password.conf", "~]# vi /etc/httpd/conf.d/nss.conf NSSNickname Server-Cert", "~]# vi /etc/httpd/conf.d/nss.conf NSSECCNickname Server-Cert", "~]# vi /etc/httpd/conf.d/nss.conf NSSCertificateDatabase /etc/httpd/alias", "~]# vi /etc/httpd/password.conf", "internal: password", "~]# chgrp apache /etc/httpd/password.conf ~]# chmod 640 /etc/httpd/password.conf ~]# ls -l /etc/httpd/password.conf -rw-r-----. 1 root apache 10 Dec 4 17:13 /etc/httpd/password.conf", "~]# vi /etc/httpd/conf.d/nss.conf", "~]# vi /etc/httpd/conf.d/nss.conf SSL Protocol: output omitted Since all protocol ranges are completely inclusive, and no protocol in the middle of a range may be excluded, the entry \"NSSProtocol SSLv3,TLSv1.1\" is identical to the entry \"NSSProtocol SSLv3,TLSv1.0,TLSv1.1\". NSSProtocol SSLv3,TLSv1.0,TLSv1.1", "SSL Protocol: NSSProtocol TLSv1.0,TLSv1.1", "Listen 443", "VirtualHost default :443", "~]# grep NSSProtocol /etc/httpd/conf.d/nss.conf middle of a range may be excluded, the entry \" NSSProtocol SSLv3,TLSv1.1\" is identical to the entry \" NSSProtocol SSLv3,TLSv1.0,TLSv1.1\". 
NSSProtocol TLSv1.0,TLSv1.1", "~]# service httpd restart", "~]# yum install openssl", "openssl s_client -connect hostname : port - protocol", "~]# openssl s_client -connect localhost:443 -ssl3 CONNECTED(00000003) 3077773036:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:337: output omitted New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 output truncated", "~]USD openssl s_client -connect localhost:443 -tls1 CONNECTED(00000003) depth=1 C = US, O = example.com, CN = Certificate Shack output omitted New, TLSv1/SSLv3, Cipher is AES128-SHA Server public key is 1024 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 output truncated", "~]# mv key_file.key /etc/pki/tls/private/hostname.key ~]# mv certificate.crt /etc/pki/tls/certs/hostname.crt", "SSLCertificateFile /etc/pki/tls/certs/ hostname .crt SSLCertificateKeyFile /etc/pki/tls/private/ hostname .key", "~]# mv /etc/httpd/conf/httpsd.key /etc/pki/tls/private/penguin.example.com.key ~]# mv /etc/httpd/conf/httpsd.crt /etc/pki/tls/certs/penguin.example.com.crt", "~]# yum install crypto-utils", "~]# openssl req -x509 -new -set_serial number -key hostname.key -out hostname.crt", "~]# rm /etc/pki/tls/private/hostname.key", "~]# genkey hostname", "SSLCertificateFile /etc/pki/tls/certs/ hostname .crt SSLCertificateKeyFile /etc/pki/tls/private/ hostname .key", "~]# firewall-cmd --add-service http success", "~]# firewall-cmd --add-service https success", "~]# firewall-cmd --list-all public (default, active) interfaces: em1 sources: services: dhcpv6-client ssh output truncated", "services: dhcpv6-client http https ssh", "~] yum install httpd-manual ~] apachectl graceful" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Web_Servers
Chapter 13. Porting containers to OpenShift using Podman
Chapter 13. Porting containers to OpenShift using Podman You can generate portable descriptions of containers and pods by using the YAML ("YAML Ain't Markup Language") format. The YAML is a text format used to describe the configuration data. The YAML files are: Readable. Easy to generate. Portable between environments (for example between RHEL and OpenShift). Portable between programming languages. Convenient to use (no need to add all the parameters to the command line). Reasons to use YAML files: You can re-run a local orchestrated set of containers and pods with minimal input required which can be useful for iterative development. You can run the same containers and pods on another machine. For example, to run an application in an OpenShift environment and to ensure that the application is working correctly. You can use podman generate kube command to generate a Kubernetes YAML file. Then, you can use podman play command to test the creation of pods and containers on your local system before you transfer the generated YAML files to the Kubernetes or OpenShift environment. Using the podman play command, you can also recreate pods and containers originally created in OpenShift or Kubernetes environments. Note The podman kube play command supports a subset of Kubernetes YAML capabilities. For more information, see the support matrix of supported YAML fields . 13.1. Generating a Kubernetes YAML file using Podman You can create a pod with one container and generate the Kubernetes YAML file using the podman generate kube command. Prerequisites The container-tools module is installed. The pod has been created. For details, see section Creating pods . Procedure List all pods and containers associated with them: Use the pod name or ID to generate the Kubernetes YAML file: Note that the podman generate command does not reflect any Logical Volume Manager (LVM) logical volumes or physical volumes that might be attached to the container. Display the mypod.yaml file: Additional resources podman-generate-kube man page on your system Podman: Managing pods and containers in a local container runtime 13.2. Generating a Kubernetes YAML file in OpenShift environment In the OpenShift environment, use the oc create command to generate the YAML files describing your application. Procedure Generate the YAML file for your myapp application: The oc create command creates and run the myapp image. The object is printed using the --dry-run option and redirected into the myapp.yaml output file. Note In the Kubernetes environment, you can use the kubectl create command with the same flags. 13.3. Starting containers and pods with Podman With the generated YAML files, you can automatically start containers and pods in any environment. The YAML files can be generated using tools other than Podman, such as Kubernetes or Openshift. The podman play kube command allows you to recreate pods and containers based on the YAML input file. Prerequisites The container-tools module is installed. Procedure Create the pod and the container from the mypod.yaml file: List all pods: List all pods and containers associated with them: The pod IDs from podman ps command matches the pod ID from the podman pod ps command. Additional resources podman-play-kube man page on your system Podman can now ease the transition to Kubernetes and CRI-O 13.4. Starting containers and pods in OpenShift environment You can use the oc create command to create pods and containers in the OpenShift environment. 
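The command has the following form (using the mypod.yaml file generated earlier in this chapter):

oc create -f mypod.yaml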
Procedure Create a pod from the YAML file in the OpenShift environment: Note In the Kubernetes environment, you can use the kubectl create command with the same flags. 13.5. Manually running containers and pods using Podman The following procedure shows how to manually create a WordPress content management system paired with a MariaDB database using Podman. Suppose the following directory layout: Prerequisites The container-tools module is installed. Procedure Display the mariadb-conf/Containerfile file: Display the mariadb-conf/my.cnf file: Build the docker.io/library/mariadb image using mariadb-conf/Containerfile : Optional: List all images: Create the pod named wordpresspod and configure port mappings between the container and the host system: Create the mydb container inside the wordpresspod pod: Create the myweb container inside the wordpresspod pod: Optional: List all pods and containers associated with them: Verification Verify that the pod is running: Visit the http://localhost:8080/wp-admin/install.php page or use the curl command: Additional resources Build Kubernetes pods with Podman play kube podman-play-kube man page on your system 13.6. Generating a YAML file using Podman You can generate a Kubernetes YAML file using the podman generate kube command. Prerequisites The container-tools module is installed. The pod named wordpresspod has been created. For details, see section Creating pods . Procedure List all pods and containers associated with them: Use the pod name or ID to generate the Kubernetes YAML file: Verification Display the wordpresspod.yaml file: Additional resources Build Kubernetes pods with Podman play kube podman-play-kube man page on your system 13.7. Automatically running containers and pods using Podman You can use the podman play kube command to test the creation of pods and containers on your local system before you transfer the generated YAML files to the Kubernetes or OpenShift environment. The podman play kube command can also automatically build and run multiple pods with multiple containers in the pod using the YAML file similarly to the docker compose command. The images are automatically built if the following conditions are met: a directory with the same name as the image used in YAML file exists that directory contains a Containerfile Prerequisites The container-tools module is installed. The pod named wordpresspod has been created. For details, see section Manually running containers and pods using Podman . The YAML file has been generated. For details, see section Generating a YAML file using Podman . To repeat the whole scenario from the beginning, delete locally stored images: Procedure Create the wordpress pod using the wordpress.yaml file: The podman play kube command: Automatically build the localhost/mariadb-conf:latest image based on docker.io/library/mariadb image. Pull the docker.io/library/wordpress:latest image. Create a pod named wordpresspod with two containers named wordpresspod-mydb and wordpresspod-myweb . List all containers and pods: Verification Verify that the pod is running: Visit the http://localhost:8080/wp-admin/install.php page or use the curl command: Additional resources Build Kubernetes pods with Podman play kube podman-play-kube man page on your system 13.8. Automatically stopping and removing pods using Podman The podman play kube --down command stops and removes all pods and their containers. Note If a volume is in use, it is not removed. Prerequisites The container-tools module is installed. 
The pod named wordpresspod has been created. For details, see section Manually running containers and pods using Podman . The YAML file has been generated. For details, see section Generating a YAML file using Podman . The pod is running. For details, see section Automatically running containers and pods using Podman . Procedure Remove all pods and containers created by the wordpresspod.yaml file: Verification Verify that all pods and containers created by the wordpresspod.yaml file were removed: Additional resources Build Kubernetes pods with Podman play kube podman-play-kube man page on your system
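For reference, the removal and verification commands used in this procedure are:

podman play kube --down wordpresspod.yaml
podman ps --pod -a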
[ "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 5df5c48fea87 registry.access.redhat.com/ubi8/ubi:latest /bin/bash Less than a second ago Up Less than a second ago myubi 223df6b390b4 3afdcd93de3e k8s.gcr.io/pause:3.1 Less than a second ago Up Less than a second ago 223df6b390b4-infra 223df6b390b4", "podman generate kube mypod > mypod.yaml", "cat mypod.yaml Generation of Kubernetes YAML is still under development! # Save the output of this file and use kubectl create -f to import it into Kubernetes. # Created with podman-1.6.4 apiVersion: v1 kind: Pod metadata: creationTimestamp: \"2020-06-09T10:31:56Z\" labels: app: mypod name: mypod spec: containers: - command: - /bin/bash env: - name: PATH value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - name: TERM value: xterm - name: HOSTNAME - name: container value: oci image: registry.access.redhat.com/ubi8/ubi:latest name: myubi resources: {} securityContext: allowPrivilegeEscalation: true capabilities: {} privileged: false readOnlyRootFilesystem: false tty: true workingDir: / status: {}", "oc create myapp --image=me/myapp:v1 -o yaml --dry-run > myapp.yaml", "podman play kube mypod.yaml Pod: b8c5b99ba846ccff76c3ef257e5761c2d8a5ca4d7ffa3880531aec79c0dacb22 Container: 848179395ebd33dd91d14ffbde7ae273158d9695a081468f487af4e356888ece", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID b8c5b99ba846 mypod Running 19 seconds ago 2 aa4220eaf4bb", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 848179395ebd registry.access.redhat.com/ubi8/ubi:latest /bin/bash About a minute ago Up About a minute ago myubi b8c5b99ba846 aa4220eaf4bb k8s.gcr.io/pause:3.1 About a minute ago Up About a minute ago b8c5b99ba846-infra b8c5b99ba846", "oc create -f mypod.yaml", "โ”œโ”€โ”€ mariadb-conf โ”‚ โ”œโ”€โ”€ Containerfile โ”‚ โ”œโ”€โ”€ my.cnf", "cat mariadb-conf/Containerfile FROM docker.io/library/mariadb COPY my.cnf /etc/mysql/my.cnf", "Port or socket location where to connect port = 3306 socket = /run/mysqld/mysqld.sock Import all .cnf files from the configuration directory [mariadbd] skip-host-cache skip-name-resolve bind-address = 127.0.0.1 !includedir /etc/mysql/mariadb.conf.d/ !includedir /etc/mysql/conf.d/", "cd mariadb-conf podman build -t mariadb-conf . cd .. 
STEP 1: FROM docker.io/library/mariadb Trying to pull docker.io/library/mariadb:latest Getting image source signatures Copying blob 7b1a6ab2e44d done Storing signatures STEP 2: COPY my.cnf /etc/mysql/my.cnf STEP 3: COMMIT mariadb-conf --> ffae584aa6e Successfully tagged localhost/mariadb-conf:latest ffae584aa6e733ee1cdf89c053337502e1089d1620ff05680b6818a96eec3c17", "podman images LIST IMAGES REPOSITORY TAG IMAGE ID CREATED SIZE localhost/mariadb-conf latest b66fa0fa0ef2 57 seconds ago 416 MB", "podman pod create --name wordpresspod -p 8080:80", "podman run --detach --pod wordpresspod -e MYSQL_ROOT_PASSWORD=1234 -e MYSQL_DATABASE=mywpdb -e MYSQL_USER=mywpuser -e MYSQL_PASSWORD=1234 --name mydb localhost/mariadb-conf", "podman run --detach --pod wordpresspod -e WORDPRESS_DB_HOST=127.0.0.1 -e WORDPRESS_DB_NAME=mywpdb -e WORDPRESS_DB_USER=mywpuser -e WORDPRESS_DB_PASSWORD=1234 --name myweb docker.io/wordpress", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 9ea56f771915 k8s.gcr.io/pause:3.5 Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp 4b7f054a6f01-infra 4b7f054a6f01 wordpresspod 60e8dbbabac5 localhost/mariadb-conf:latest mariadbd Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp mydb 4b7f054a6f01 wordpresspod 045d3d506e50 docker.io/library/wordpress:latest apache2-foregroun... Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp myweb 4b7f054a6f01 wordpresspod", "curl http://localhost:8080/wp-admin/install.php <!DOCTYPE html> <html lang=\"en-US\" xml:lang=\"en-US\"> <head> </head> <body class=\"wp-core-ui\"> <p id=\"logo\">WordPress</p> <h1>Welcome</h1>", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 9ea56f771915 k8s.gcr.io/pause:3.5 Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp 4b7f054a6f01-infra 4b7f054a6f01 wordpresspod 60e8dbbabac5 localhost/mariadb-conf:latest mariadbd Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp mydb 4b7f054a6f01 wordpresspod 045d3d506e50 docker.io/library/wordpress:latest apache2-foregroun... 
Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp myweb 4b7f054a6f01 wordpresspod", "podman generate kube wordpresspod >> wordpresspod.yaml", "cat wordpresspod.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: \"2021-12-09T15:09:30Z\" labels: app: wordpresspod name: wordpresspod spec: containers: - args: value: podman - name: MYSQL_PASSWORD value: \"1234\" - name: MYSQL_MAJOR value: \"8.0\" - name: MYSQL_VERSION value: 8.0.27-1debian10 - name: MYSQL_ROOT_PASSWORD value: \"1234\" - name: MYSQL_DATABASE value: mywpdb - name: MYSQL_USER value: mywpuser image: mariadb name: mydb ports: - containerPort: 80 hostPort: 8080 protocol: TCP - args: - name: WORDPRESS_DB_NAME value: mywpdb - name: WORDPRESS_DB_PASSWORD value: \"1234\" - name: WORDPRESS_DB_HOST value: 127.0.0.1 - name: WORDPRESS_DB_USER value: mywpuser image: docker.io/library/wordpress:latest name: myweb", "podman rmi localhost/mariadb-conf podman rmi docker.io/library/wordpress podman rmi docker.io/library/mysql", "podman play kube wordpress.yaml STEP 1/2: FROM docker.io/library/mariadb STEP 2/2: COPY my.cnf /etc/mysql/my.cnf COMMIT localhost/mariadb-conf:latest --> 428832c45d0 Successfully tagged localhost/mariadb-conf:latest 428832c45d07d78bb9cb34e0296a7dc205026c2fe4d636c54912c3d6bab7f399 Trying to pull docker.io/library/wordpress:latest Getting image source signatures Copying blob 99c3c1c4d556 done Storing signatures Pod: 3e391d091d190756e655219a34de55583eed3ef59470aadd214c1fc48cae92ac Containers: 6c59ebe968467d7fdb961c74a175c88cb5257fed7fb3d375c002899ea855ae1f 29717878452ff56299531f79832723d3a620a403f4a996090ea987233df0bc3d", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME a1dbf7b5606c k8s.gcr.io/pause:3.5 3 minutes ago Up 2 minutes ago 0.0.0.0:8080->80/tcp 3e391d091d19-infra 3e391d091d19 wordpresspod 6c59ebe96846 localhost/mariadb-conf:latest mariadbd 2 minutes ago Exited (1) 2 minutes ago 0.0.0.0:8080->80/tcp wordpresspod-mydb 3e391d091d19 wordpresspod 29717878452f docker.io/library/wordpress:latest apache2-foregroun... 2 minutes ago Up 2 minutes ago 0.0.0.0:8080->80/tcp wordpresspod-myweb 3e391d091d19 wordpresspod", "curl http://localhost:8080/wp-admin/install.php <!DOCTYPE html> <html lang=\"en-US\" xml:lang=\"en-US\"> <head> </head> <body class=\"wp-core-ui\"> <p id=\"logo\">WordPress</p> <h1>Welcome</h1>", "podman play kube --down wordpresspod.yaml Pods stopped: 3e391d091d190756e655219a34de55583eed3ef59470aadd214c1fc48cae92ac Pods removed: 3e391d091d190756e655219a34de55583eed3ef59470aadd214c1fc48cae92ac", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_porting-containers-to-openshift-using-podman_building-running-and-managing-containers
A.4. Generating LDAP URLs
A.4. Generating LDAP URLs LDAP URLs are used in a variety of configuration areas and operations: referrals and chaining, replication, synchronization, ACIs, and indexing, as a starting list. Constructing accurate LDAP URLs is critical, because incorrect URLs may connect to the wrong server or simply cause operations to fail. Additionally, all OpenLDAP tools allow the -H option to pass an LDAP URL instead of other connection information (like the host name, port, subtree, and search base). Note LDAP URLs are described in Appendix C, LDAP URLs . The ldapurl command manages URLs in two ways: Deconstruct a given LDAP URL into its constituent elements Construct a new, valid LDAP URL from given elements The parameters for working with URLs are listed in Table A.1, "ldapurl Parameters" ; the full list of parameters is in the OpenLDAP manpages.
Table A.1. ldapurl Parameters
For deconstructing a URL:
-H "URL" - Passes the LDAP URL to break down into elements.
For constructing a URL:
-a attributes - Gives a comma-separated list of attributes that are specifically returned in search results.
-b base - Sets the search base or subtree for the URL.
-f filter - Sets the search filter to use.
-h hostname - Gives the Directory Server's host name.
-p port - Gives the Directory Server's port.
-S ldap|ldaps|ldapi - Gives the protocol to use to connect, such as ldap , ldaps , or ldapi .
-s scope - Gives the search scope.
Example A.8. Deconstructing an LDAP URL ldapurl uses the -H option to feed in an existing LDAP URL, and the tool returns the elements of the URL in a neat list:
Example A.9. Constructing an LDAP URL The most useful application of ldapurl is to construct a valid LDAP URL manually. Using ldapurl ensures that the URL is valid. ldapurl accepts the normal connection parameters of all LDAP client tools and additional ldapsearch arguments for search base, scope, and attributes, but this tool never connects to a Directory Server instance, so it does not require any bind information. It accepts the connection and search settings and feeds them in as elements to the URL. ldapurl -a cn,sn -b dc=example,dc=com -s sub -f "(objectclass=inetorgperson)" ldap://:389/dc=example,dc=com?cn,sn?sub?(objectclass=inetorgperson)
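For example, deconstructing the URL built in Example A.9 (as in Example A.8) returns each element on its own line:

ldapurl -H "ldap://:389/dc=example,dc=com?cn,sn?sub?(objectclass=inetorgperson)"
scheme: ldap
port: 389
dn: dc=example,dc=com
selector: cn
selector: sn
scope: sub
filter: (objectclass=inetorgperson)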
[ "ldapurl -H \"ldap://:389/dc=example,dc=com?cn,sn?sub?(objectclass=inetorgperson)\" scheme: ldap port: 389 dn: dc=example,dc=com selector: cn selector: sn scope: sub filter: (objectclass=inetorgperson)", "ldapurl -a cn,sn -b dc=example,dc=com -s sub -f \"(objectclass=inetorgperson)\" ldap://:389/dc=example,dc=com?cn,sn?sub?(objectclass=inetorgperson)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ldapurl
probe::socket.writev
probe::socket.writev
Name
probe::socket.writev - Message sent via socket_writev
Synopsis
socket.writev
Values
state - Socket state value
protocol - Protocol value
name - Name of this probe
family - Protocol family value
size - Message size in bytes
type - Socket type value
flags - Socket flags value
Context
The message sender
Description
Fires at the beginning of sending a message on a socket via the sock_writev function
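As a minimal illustration, assuming the systemtap package and the kernel debugging information are installed, the probe can be exercised with a one-line script that prints the values listed above:

stap -e 'probe socket.writev { printf("%s: family=%d type=%d size=%d bytes\n", name, family, type, size) }'

Each line of output corresponds to one message sent through sock_writev.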
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-writev
Chapter 6. Packaging and Deploying the Translator
Chapter 6. Packaging and Deploying the Translator 6.1. Packaging Once the "ExecutionFactory" class is implemented, package it in a JAR file. Then add a file named "META-INF/services/org.teiid.translator.ExecutionFactory" whose contents specify the fully qualified name of your main Translator class. Note that the file name must exactly match the name given above. This is Java's standard service loader pattern. This will register the Translator for deployment when the JAR is deployed.
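For example, if the main Translator class is org.teiid.translator.custom.CustomExecutionFactory , the service file contains that single line:

org.teiid.translator.custom.CustomExecutionFactory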
[ "org.teiid.translator.custom.CustomExecutionFactory" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-packaging_and_deploying_the_translator
Chapter 1. Features
Chapter 1. Features AMQ Streams version 1.6 is based on Strimzi 0.20.x. The features added in this release, and that were not in releases of AMQ Streams, are outlined below. Note To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project . 1.1. Kafka support in AMQ Streams 1.6.x (Long Term Support on OCP 3.11) This section describes the versions of Kafka and ZooKeeper that are supported in AMQ Streams 1.6 and the subsequent patch releases. AMQ Streams 1.6.x is the Long Term Support release for use with OCP 3.11, and is supported only for as long as OpenShift Container Platform 3.11 is supported. Note AMQ Streams 1.6.4 and later patch releases are supported on OCP 3.11 only. If you are using OCP 4.x you are required to upgrade to AMQ Streams 1.7.x or later. For information on support dates for AMQ LTS versions, see the Red Hat Knowledgebase solution How long are AMQ LTS releases supported? . Only Kafka distributions built by Red Hat are supported. versions of Kafka are supported in AMQ Streams 1.6.x only for upgrade purposes. For more information on supported Kafka versions, see the Red Hat AMQ 7 Component Details Page on the Customer Portal. 1.1.1. Kafka support in AMQ Streams 1.6.6 and 1.6.7 The AMQ Streams 1.6.6 and 1.6.7 releases support Apache Kafka version 2.6.3. You must upgrade the Cluster Operator before you can upgrade brokers and client applications to Kafka 2.6.3. For upgrade instructions, see AMQ Streams and Kafka upgrades . Kafka 2.6.3 requires ZooKeeper version 3.5.9. Therefore, the Cluster Operator does not perform a ZooKeeper upgrade when upgrading from AMQ Streams 1.6.4 / 1.6.5. Refer to the Kafka 2.6.3 Release Notes for additional information. 1.1.2. Kafka support in AMQ Streams 1.6.4 and 1.6.5 The AMQ Streams 1.6.4 and 1.6.5 releases support Apache Kafka version 2.6.2 and ZooKeeper version 3.5.9. You must upgrade the Cluster Operator before you can upgrade brokers and client applications to Kafka 2.6.2. For upgrade instructions, see AMQ Streams and Kafka upgrades . Kafka 2.6.2 requires ZooKeeper version 3.5.9. Therefore, the Cluster Operator will perform a ZooKeeper upgrade when upgrading from AMQ Streams 1.6.2. Refer to the Kafka 2.6.2 Release Notes for additional information. 1.1.3. Kafka support in AMQ Streams 1.6.0 and 1.6.2 AMQ Streams 1.6.0 and 1.6.2 support Apache Kafka version 2.6.0. You must upgrade the Cluster Operator before you can upgrade brokers and client applications to Kafka 2.6.0. For upgrade instructions, see AMQ Streams and Kafka upgrades . Refer to the Kafka 2.5.0 and Kafka 2.6.0 Release Notes for additional information. Kafka 2.6.0 requires the same ZooKeeper version as Kafka 2.5.x (ZooKeeper version 3.5.7 / 3.5.8). Therefore, the Cluster Operator does not perform a ZooKeeper upgrade when upgrading from AMQ Streams 1.5. 1.2. Container images move to Java 11 AMQ Streams container images move to Java 11 as the Java runtime environment (JRE). The JRE version in the images changes from OpenJDK 8 to OpenJDK 11. 1.3. Cluster Operator logging Cluster Operator logging is now configured using a ConfigMap that is automatically created when the Cluster Operator is deployed. 
The ConfigMap is described in the following new YAML file: To configure Cluster Operator logging: In the 050-ConfigMap-strimzi-cluster-operator.yaml file, edit the data.log4j2.properties field: Example Cluster Operator logging configuration kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator labels: app: strimzi data: log4j2.properties: | name = COConfig monitorInterval = 30 appender.console.type = Console appender.console.name = STDOUT # ... Update the custom resource: oc apply -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml To change the frequency that logs are reloaded, set a time in seconds in the monitorInterval field (the default reload time is 30 seconds). Note As a result of this change, the STRIMZI_LOG_LEVEL environment variable has been removed from the 060-Deployment-strimzi-cluster-operator.yaml file. Set the log level in the ConfigMap instead. See Cluster Operator configuration . 1.4. OAuth 2.0 authorization Support for OAuth 2.0 authorization moves out of Technology Preview to a generally available component of AMQ Streams. If you are using OAuth 2.0 for token-based authentication, you can now also use OAuth 2.0 authorization rules to constrain client access to Kafka brokers. AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. See Using OAuth 2.0 token-based authorization . 1.5. Open Policy Agent (OPA) integration Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with AMQ Streams to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers. When a request is made from a client, OPA will evaluate the request against policies defined for Kafka access, then allow or deny the request. You can define access control for Kafka clusters, consumer groups and topics. For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic. See KafkaAuthorizationOpa schema reference Note Red Hat does not support the OPA server. 1.6. Debezium for change data capture integration Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium has multiple uses, including: Data replication Updating caches and search indexes Simplifying monolithic applications Data integration Enabling streaming queries Debezium provides connectors (based on Kafka Connect) for the following common databases: MySQL PostgreSQL SQL Server MongoDB For more information on deploying Debezium with AMQ Streams, refer to the product documentation . 1.7. Service Registry You can use Service Registry as a centralized store of service schemas for data streaming. 
For Kafka, you can use Service Registry to store Apache Avro or JSON schemas. Service Registry provides a REST API and a Java REST client to register and query the schemas from client applications through server-side endpoints. Using Service Registry decouples the process of managing schemas from the configuration of client applications. You enable an application to use a schema from the registry by specifying its URL in the client code. For example, the schemas used to serialize and deserialize messages can be stored in the registry and then referenced from the applications that use them, ensuring that the messages they send and receive are compatible with those schemas. Kafka client applications can push or pull their schemas from Service Registry at runtime. See Managing schemas with Service Registry .
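As a rough sketch of the push/pull flow, registering and fetching a schema over the REST API with curl could look like the following. The host name, artifact ID, and schema are made up, and the endpoint layout and headers vary between Service Registry versions, so check them against your registry's REST API reference before use.

# Register an Avro schema under an artifact ID (URL, headers, and payload are illustrative).
curl -X POST http://service-registry.example.com:8080/api/artifacts \
  -H "Content-Type: application/json" \
  -H "X-Registry-ArtifactType: AVRO" \
  -H "X-Registry-ArtifactId: payment-value" \
  --data '{"type":"record","name":"Payment","fields":[{"name":"amount","type":"double"}]}'

# Pull the latest registered version of the same schema at runtime.
curl http://service-registry.example.com:8080/api/artifacts/payment-value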
[ "install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator labels: app: strimzi data: log4j2.properties: | name = COConfig monitorInterval = 30 appender.console.type = Console appender.console.name = STDOUT #", "apply -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_openshift/features-str
Chapter 1. Introduction to RHEL system roles
Chapter 1. Introduction to RHEL system roles By using RHEL system roles, you can remotely manage the system configurations of multiple RHEL systems across major versions of RHEL. Important terms and concepts The following describes important terms and concepts in an Ansible environment: Control node A control node is the system from which you run Ansible commands and playbooks. Your control node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL 9, 8, or 7 host. For more information, see Preparing a control node on RHEL 8 . Managed node Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more information, see Preparing a managed node . Ansible playbook In a playbook, you define the configuration you want to achieve on your managed nodes or a set of steps for the system on the managed node to perform. Playbooks are Ansible's configuration, deployment, and orchestration language. Inventory In an inventory file, you list the managed nodes and specify information such as IP address for each managed node. In the inventory, you can also organize the managed nodes by creating and nesting groups for easier scaling. An inventory file is also sometimes called a hostfile. Available roles on a Red Hat Enterprise Linux 8 control node On a Red Hat Enterprise Linux 8 control node, the rhel-system-roles package provides the following roles: Role name Role description Chapter title certificate Certificate Issuance and Renewal Requesting certificates by using RHEL system roles cockpit Web console Installing and configuring web console with the cockpit RHEL system role crypto_policies System-wide cryptographic policies Setting a custom cryptographic policy across systems firewall Firewalld Configuring firewalld by using system roles ha_cluster HA Cluster Configuring a high-availability cluster by using system roles kdump Kernel Dumps Configuring kdump by using RHEL system roles kernel_settings Kernel Settings Using Ansible roles to permanently configure kernel parameters logging Logging Using the logging system role metrics Metrics (PCP) Monitoring performance by using RHEL system roles microsoft.sql.server Microsoft SQL Server Configuring Microsoft SQL Server by using the microsoft.sql.server Ansible role network Networking Using the network RHEL system role to manage InfiniBand connections nbde_client Network Bound Disk Encryption client Using the nbde_client and nbde_server system roles nbde_server Network Bound Disk Encryption server Using the nbde_client and nbde_server system roles postfix Postfix Variables of the postfix role in system roles postgresql PostgreSQL Installing and configuring PostgreSQL by using the postgresql RHEL system role selinux SELinux Configuring SELinux by using system roles ssh SSH client Configuring secure communication with the ssh system roles sshd SSH server Configuring secure communication with the ssh system roles storage Storage Managing local storage by using RHEL system roles tlog Terminal Session Recording Configuring a system for session recording by using the tlog RHEL system role timesync Time Synchronization Configuring time synchronization by using RHEL system roles vpn VPN Configuring VPN connections with IPsec by using the vpn RHEL system role Additional resources Red Hat Enterprise Linux (RHEL) system roles /usr/share/ansible/roles/rhel-system-roles. 
<role_name> /README.md file /usr/share/doc/rhel-system-roles/ <role_name> / directory
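To show how one of the roles in the table is typically consumed, here is a minimal sketch using the timesync role; the inventory host, group name, and NTP server are placeholders, and the variable names follow the role's README under the documentation directories listed above:

# Inventory listing one managed node.
cat > inventory <<'EOF'
[webservers]
managed-node-01.example.com
EOF

# Playbook that applies the timesync role from the rhel-system-roles package.
cat > configure-timesync.yml <<'EOF'
- name: Configure NTP with the timesync RHEL system role
  hosts: webservers
  vars:
    timesync_ntp_servers:
      - hostname: 0.rhel.pool.ntp.org
        iburst: yes
  roles:
    - rhel-system-roles.timesync
EOF

# Run the playbook from the control node.
ansible-playbook -i inventory configure-timesync.yml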
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/intro-to-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 9. Uninstalling Dev Spaces
Chapter 9. Uninstalling Dev Spaces Warning Uninstalling OpenShift Dev Spaces removes all OpenShift Dev Spaces-related user data! Use dsc to uninstall the OpenShift Dev Spaces instance. Prerequisites dsc . See: Section 2.2, "Installing the dsc management tool" . Procedure Remove the OpenShift Dev Spaces instance: Tip The --delete-namespace option removes the OpenShift Dev Spaces namespace. The --delete-all option removes the Dev Workspace Operator and the related resources.
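A minimal sketch of the removal step, combining the options mentioned in the Tip; confirm the exact flag spellings with dsc server:delete --help before running it:

# Remove the OpenShift Dev Spaces instance together with its namespace
# and the Dev Workspace Operator resources.
dsc server:delete --delete-namespace --delete-all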
[ "dsc server:delete" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/administration_guide/uninstalling-devspaces
Chapter 8. Kernel
Chapter 8. Kernel KVM Hypervisor supports 240 vCPUs per virtual machine The KVM hypervisor has been improved to support 240 virtual CPUs (vCPUs) per KVM guest virtual machine. iwlwifi supports Intel(R) Wireless 7265/3165 (Stone Peak) wireless adapter The iwlwifi device driver now supports the Intel(R) Wireless 7265/3165 (Stone Peak) wireless adapter. Support for Wacom 22HD Touch tablets This update adds support for Wacom 22HD Touch tablets, which are now correctly recognized in Red Hat Enterprise Linux and thus functional. Improved page fault scalability for HugeTLB The updated Linux kernel has improved page fault scalability for HugeTLB. Previously only one HugeTLB page fault could be processed at a time because a single mutex was used. The improved method uses a table of mutexes, allowing for page faults to be processed in parallel. Calculation of the mutex table includes the number of page faults occurring and memory in use. kdump supports hugepage filtering To reduce both vmcore size and capture run time, kdump now treats hugepages as userpages and can filter them out. As hugepages are primarily used for application data, they are unlikely to be relevant in the event a vmcore analysis is required. Support for 802.1X EAP packet forwarding on bridges Bridge forwarding of 802.1x EAP packets is now supported, allowing for selective forwarding of some non-control link-local packets. This change also enables the use of 802.1X to authenticate a guest on a RHEL6 hypervisor using Linux bridge on a switch port. Rebase of the mtip32xx driver The Red Hat Enterprise Linux 6.7 kernel includes the most recent upstream version of the mtip32xx device driver. This version adds support for Micron SSD devices. turbostat supports 6th Generation Intel Core Processors The turbostat application now supports Intel's 6th Generation Intel Core Processors.
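To make the kdump hugepage filtering above concrete: hugepages are dropped whenever the makedumpfile dump level filters user pages. The snippet below only checks an existing configuration; the core_collector line shown in the comment is a typical default, not a value taken from this chapter.

# Inspect the dump level used by the core collector; a level that includes
# user data (for example -d 31) now also filters hugepages out of the vmcore.
grep '^core_collector' /etc/kdump.conf
# Typical output (illustrative):
# core_collector makedumpfile -c --message-level 1 -d 31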
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/kernel
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_controller/providing-feedback
Chapter 5. Querying
Chapter 5. Querying Infinispan Query can execute Lucene queries and retrieve domain objects from a Red Hat JBoss Data Grid cache. Procedure 5.1. Prepare and Execute a Query Get the SearchManager of an indexing-enabled cache as follows: Create a QueryBuilder to build queries for Myth.class as follows: Create an Apache Lucene query that queries the Myth.class class's attributes as follows: 5.1. Building Queries Query Module queries are built on Lucene queries, allowing users to use any Lucene query type. When the query is built, Infinispan Query uses org.infinispan.query.CacheQuery as the query manipulation API for further query processing. 5.1.1. Building a Lucene Query Using the Lucene-based Query API With the Lucene API, use either the query parser (simple queries) or the Lucene programmatic API (complex queries). For details, see the online Lucene documentation or a copy of Lucene in Action or Hibernate Search in Action . 5.1.2. Building a Lucene Query Using the Lucene programmatic API, it is possible to write full-text queries. However, when using the Lucene programmatic API, the parameters must be converted to their string equivalent and the correct analyzer must be applied to the right field. An ngram analyzer, for example, uses several ngrams as the tokens for a given word and should be searched as such. It is recommended to use the QueryBuilder for this task. The Lucene-based query API is fluent. This API has the following key characteristics: Method names are in English. As a result, API operations can be read and understood as a series of English phrases and instructions. It uses IDE autocompletion, which suggests possible completions for the current input prefix and allows the user to choose the right option. It often uses the method chaining pattern. The API operations are easy to use and read. To use the API, first create a query builder that is attached to a given indexed type. This QueryBuilder knows what analyzer to use and what field bridge to apply. Several QueryBuilder s (one for each type involved in the root of your query) can be created. The QueryBuilder is derived from the SearchFactory . The analyzer used for a given field or fields can also be overridden. The query builder is now used to build Lucene queries. 5.1.2.1. Keyword Queries The following example shows how to search for a specific word: Example 5.1. Keyword Search Table 5.1. Keyword query parameters Parameter Description keyword() Use this parameter to find a specific word onField() Use this parameter to specify in which Lucene field to search for the word matching() Use this parameter to specify the string to match createQuery() Creates the Lucene query object The value "storm" is passed through the history FieldBridge . This is useful when numbers or dates are involved. The field bridge value is then passed to the analyzer used to index the field history . This ensures that the query uses the same term transformations as the indexing (lower case, ngram, stemming, and so on). If the analyzing process generates several terms for a given word, a boolean query is used with the SHOULD logic (roughly an OR logic). The following example shows how to search a property that is not of type string. Note In plain Lucene, the Date object had to be converted to its string representation (in this case, the year). This conversion works for any object, provided that the FieldBridge has an objectToString method (and all built-in FieldBridge implementations do). The next example searches a field that uses ngram analyzers.
The ngram analyzers index successions of ngrams of words, which helps to avoid user typos. For example, the 3-grams of the word hibernate are hib, ibe, ber, rna, nat, ate. Example 5.2. Searching Using Ngram Analyzers The matching word "Sisiphus" will be lower-cased and then split into 3-grams: sis, isi, sip, phu, hus. Each of these ngrams will be part of the query. The user is then able to find the Sysiphus myth (with a y ). All of that is done transparently for the user. Note If the user does not want a specific field to use the field bridge or the analyzer, the ignoreAnalyzer() or ignoreFieldBridge() functions can be called. To search for multiple possible words in the same field, add them all in the matching clause. Example 5.3. Searching for Multiple Words To search for the same word on multiple fields, use the onFields method. Example 5.4. Searching Multiple Fields In some cases, one field must be treated differently from another field even when searching for the same term. In this case, use the andField() method. Example 5.5. Using the andField Method In the example, only the field name is boosted to 5. 5.1.2.2. Fuzzy Queries To execute a fuzzy query (based on the Levenshtein distance algorithm), start like a keyword query and add the fuzzy flag. Example 5.6. Fuzzy Query The threshold is the limit above which two terms are considered matching. It is a decimal between 0 and 1, and the default value is 0.5. The prefixLength is the length of the prefix ignored by the "fuzziness". While the default value is 0, a non-zero value is recommended for indexes containing a huge number of distinct terms. 5.1.2.3. Wildcard Queries Wildcard queries can also be executed (queries where some parts of the word are unknown). The ? represents a single character and * represents any character sequence. Note that for performance purposes, it is recommended that the query does not start with either ? or * . Example 5.7. Wildcard Query Note Wildcard queries do not apply the analyzer on the matching terms. Otherwise, the risk of * or ? being mangled is too high. 5.1.2.4. Phrase Queries So far we have been looking for words or sets of words; the user can also search for exact or approximate sentences. Use phrase() to do so. Example 5.8. Phrase Query Approximate sentences can be searched for by adding a slop factor. The slop factor represents the number of other words permitted in the sentence: this works like a within or near operator. Example 5.9. Adding Slop Factor 5.1.2.5. Range Queries A range query searches for a value in between given boundaries (included or not) or for a value below or above a given boundary (included or not). Example 5.10. Range Query 5.1.2.6. Combining Queries Queries can be aggregated (combined) to create more complex queries. The following aggregation operators are available: SHOULD : the query should contain the matching elements of the subquery. MUST : the query must contain the matching elements of the subquery. MUST NOT : the query must not contain the matching elements of the subquery. The subqueries can be any Lucene query, including a boolean query itself. The following are some examples: Example 5.11. Combining Subqueries 5.1.2.7. Query Options The following is a summary of query options for query types and fields: boostedTo (on query type and on field) boosts the query or field to a provided factor. withConstantScore (on query) returns all results that match the query and have a constant score equal to the boost.
filteredBy(Filter) (on query) filters query results using the Filter instance. ignoreAnalyzer (on field) ignores the analyzer when processing this field. ignoreFieldBridge (on field) ignores the field bridge when processing this field. The following example illustrates how to use these options: Example 5.12. Querying Options 5.1.3. Build a Query with Infinispan Query 5.1.3.1. Generality After building the Lucene query, wrap it within an Infinispan CacheQuery. The query searches all indexed entities and returns all types of indexed classes unless explicitly configured not to do so. Example 5.13. Wrapping a Lucene Query in an Infinispan CacheQuery For improved performance, restrict the returned types as follows: Example 5.14. Filtering the Search Result by Entity Type The first part of the second example returns only the matching Customer instances. The second part of the same example returns matching Actor and Item instances. The type restriction is polymorphic. As a result, if the query should return the two subclasses Salesman and Customer of the base class Person , specify Person.class to filter based on result types. 5.1.3.2. Pagination To avoid performance degradation, it is recommended to restrict the number of returned objects per query. A user navigating from one page to another page is a very common use case. The way to define pagination is similar to defining pagination in a plain HQL or Criteria query. Example 5.15. Defining pagination for a search query Note The total number of matching elements, despite the pagination, is accessible via cacheQuery.getResultSize() . 5.1.3.3. Sorting Apache Lucene contains a flexible and powerful result sorting mechanism. The default sorting is by relevance and is appropriate for a large variety of use cases. The sorting mechanism can be changed to sort by other properties by using the Lucene Sort object to apply a Lucene sorting strategy. Example 5.16. Specifying a Lucene Sort Note Fields used for sorting must not be tokenized. For more information about tokenizing, see Section 4.1.2, "@Field" . 5.1.3.4. Projection In some cases, only a small subset of the properties is required. Use Infinispan Query to return a subset of properties as follows: Example 5.17. Using Projection Instead of Returning the Full Domain Object The Query Module extracts properties from the Lucene index, converts them to their object representation, and returns a list of Object[] . Projections prevent a time-consuming database round-trip. However, they have the following constraints: The properties projected must be stored in the index ( @Field(store=Store.YES) ), which increases the index size. The properties projected must use a FieldBridge implementing org.infinispan.query.bridge.TwoWayFieldBridge or org.infinispan.query.bridge.TwoWayStringBridge , the latter being the simpler version. Note All Lucene-based Query API built-in types are two-way. Only the simple properties of the indexed entity or its embedded associations can be projected. Therefore, a whole embedded entity cannot be projected. Projection does not work on collections or maps which are indexed via @IndexedEmbedded . Lucene provides metadata information about query results. Use projection constants to retrieve the metadata. Example 5.18.
Using Projection to Retrieve Metadata Fields can be mixed with the following projection constants: FullTextQuery.THIS returns the initialized and managed entity, as a non-projected query does. FullTextQuery.DOCUMENT returns the Lucene Document related to the projected object. FullTextQuery.OBJECT_CLASS returns the indexed entity's class. FullTextQuery.SCORE returns the document score in the query. Use scores to compare one result against another for a given query. However, scores are not relevant for comparing the results of two different queries. FullTextQuery.ID is the ID property value of the projected object. FullTextQuery.DOCUMENT_ID is the Lucene document ID. The Lucene document ID changes between two IndexReader openings. FullTextQuery.EXPLANATION returns the Lucene Explanation object for the matching object/document in the query. This is not suitable for retrieving large amounts of data. Running FullTextQuery.EXPLANATION is as expensive as running a Lucene query for each matching element. As a result, projection is recommended. 5.1.3.5. Limiting the Time of a Query Limit the time a query takes in Infinispan Query as follows: Raise an exception when the limit is reached. Limit the number of results retrieved when the time limit is reached. 5.1.3.6. Raise an Exception on Time Limit If a query uses more than the defined amount of time, a custom exception can be defined to be thrown. To define the limit when using the CacheQuery API, use the following approach: Example 5.19. Defining a Timeout in Query Execution The getResultSize() , iterate() , and scroll() methods honor the timeout until the end of the method call. As a result, the methods of the returned Iterable or ScrollableResults ignore the timeout. Additionally, explain() does not honor this timeout period. This method is used for debugging and to check the reasons for slow performance of a query. Important The example code does not guarantee that the query stops at the specified number of results.
[ "SearchManager manager = Search.getSearchManager(cache);", "final org.hibernate.search.query.dsl.QueryBuilder queryBuilder = manager.buildQueryBuilderForClass(Myth.class).get();", "org.apache.lucene.search.Query query = queryBuilder.keyword() .onField(\"history\").boostedTo(3) .matching(\"storm\") .createQuery(); // wrap Lucene query in a org.infinispan.query.CacheQuery CacheQuery cacheQuery = manager.getQuery(query); // Get query result List<Object> result = cacheQuery.list();", "Search.getSearchManager(cache).buildQueryBuilderForClass(Myth.class).get();", "SearchFactory searchFactory = Search.getSearchManager(cache).getSearchFactory(); QueryBuilder mythQB = searchFactory.buildQueryBuilder() .forEntity(Myth.class) .overridesForField(\"history\",\"stem_analyzer_definition\") .get();", "Query luceneQuery = mythQB.keyword().onField(\"history\").matching(\"storm\").createQuery();", "@Indexed public class Myth { @Field(analyze = Analyze.NO) @DateBridge(resolution = Resolution.YEAR) public Date getCreationDate() { return creationDate; } public Date setCreationDate(Date creationDate) { this.creationDate = creationDate; } private Date creationDate; } Date birthdate = ...; Query luceneQuery = mythQb.keyword() .onField(\"creationDate\") .matching(birthdate) .createQuery();", "@AnalyzerDef(name = \"ngram\", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = StandardFilterFactory.class), @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = StopFilterFactory.class), @TokenFilterDef(factory = NGramFilterFactory.class, params = { @Parameter(name = \"minGramSize\", value = \"3\"), @Parameter(name = \"maxGramSize\", value = \"3\")}) }) public class Myth { @Field(analyzer = @Analyzer(definition = \"ngram\")) public String getName() { return name; } public String setName(String name) { this.name = name; } private String name; } Date birthdate = ...; Query luceneQuery = mythQb.keyword() .onField(\"name\") .matching(\"Sisiphus\") .createQuery();", "//search document with storm or lightning in their history Query luceneQuery = mythQB.keyword().onField(\"history\").matching(\"storm lightning\").createQuery();", "Query luceneQuery = mythQB .keyword() .onFields(\"history\",\"description\",\"name\") .matching(\"storm\") .createQuery();", "Query luceneQuery = mythQB.keyword() .onField(\"history\") .andField(\"name\") .boostedTo(5) .andField(\"description\") .matching(\"storm\") .createQuery();", "Query luceneQuery = mythQB.keyword() .fuzzy() .withThreshold(.8f) .withPrefixLength(1) .onField(\"history\") .matching(\"starm\") .createQuery();", "Query luceneQuery = mythQB.keyword() .wildcard() .onField(\"history\") .matching(\"sto*\") .createQuery();", "Query luceneQuery = mythQB.phrase() .onField(\"history\") .sentence(\"Thou shalt not kill\") .createQuery();", "Query luceneQuery = mythQB.phrase() .withSlop(3) .onField(\"history\") .sentence(\"Thou kill\") .createQuery();", "//look for 0 <= starred < 3 Query luceneQuery = mythQB.range() .onField(\"starred\") .from(0).to(3).excludeLimit() .createQuery(); //look for myths strictly BC Date beforeChrist = ...; Query luceneQuery = mythQB.range() .onField(\"creationDate\") .below(beforeChrist).excludeLimit() .createQuery();", "//look for popular modern myths that are not urban Date twentiethCentury = ...; Query luceneQuery = mythQB.bool() .must(mythQB.keyword().onField(\"description\").matching(\"urban\").createQuery()) .not() 
.must(mythQB.range().onField(\"starred\").above(4).createQuery()) .must(mythQB.range() .onField(\"creationDate\") .above(twentiethCentury) .createQuery()) .createQuery(); //look for popular myths that are preferably urban Query luceneQuery = mythQB .bool() .should(mythQB.keyword() .onField(\"description\") .matching(\"urban\") .createQuery()) .must(mythQB.range().onField(\"starred\").above(4).createQuery()) .createQuery(); //look for all myths except religious ones Query luceneQuery = mythQB.all() .except(mythQb.keyword() .onField(\"description_stem\") .matching(\"religion\") .createQuery()) .createQuery();", "Query luceneQuery = mythQB .bool() .should(mythQB.keyword().onField(\"description\").matching(\"urban\").createQuery()) .should(mythQB .keyword() .onField(\"name\") .boostedTo(3) .ignoreAnalyzer() .matching(\"urban\").createQuery()) .must(mythQB .range() .boostedTo(5) .withConstantScore() .onField(\"starred\") .above(4).createQuery()) .createQuery();", "CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery);", "CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Customer.class); // or CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Item.class, Actor.class);", "CacheQuery cacheQuery = Search.getSearchManager(cache) .getQuery(luceneQuery, Customer.class); cacheQuery.firstResult(15); //start from the 15th element cacheQuery.maxResults(10); //return 10 elements", "org.infinispan.query.CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Book.class); org.apache.lucene.search.Sort sort = new Sort( new SortField(\"title\", SortField.STRING)); cacheQuery.sort(sort); List results = cacheQuery.list();", "SearchManager searchManager = Search.getSearchManager(cache); CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class); cacheQuery.projection(\"id\", \"summary\", \"body\", \"mainAuthor.name\"); List results = cacheQuery.list(); Object[] firstResult = (Object[]) results.get(0); Integer id = (Integer) firstResult[0]; String summary = (String) firstResult[1]; String body = (String) firstResult[2]; String authorName = (String) firstResult[3];", "SearchManager searchManager = Search.getSearchManager(cache); CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class); cacheQuery.projection(\"mainAuthor.name\"); List results = cacheQuery.list(); Object[] firstResult = (Object[]) results.get(0); float score = (Float) firstResult[0]; Book book = (Book) firstResult[1]; String authorName = (String) firstResult[2];", "SearchManagerImplementor searchManager = (SearchManagerImplementor) Search.getSearchManager(cache); searchManager.setTimeoutExceptionFactory(new MyTimeoutExceptionFactory()); CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class); //define the timeout in seconds cacheQuery.timeout(2, TimeUnit.SECONDS) try { query.list(); } catch (MyTimeoutException e) { //do something, too slow } private static class MyTimeoutExceptionFactory implements TimeoutExceptionFactory { @Override public RuntimeException createTimeoutException(String message, Query query) { return new MyTimeoutException(); } } public static class MyTimeoutException extends RuntimeException { }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/chap-Querying
Chapter 4. Managing Users and Groups
Chapter 4. Managing Users and Groups The control of users and groups is a core element of Red Hat Enterprise Linux system administration. This chapter explains how to add, manage, and delete users and groups in the graphical user interface and on the command line, and covers advanced topics, such as creating group directories. 4.1. Introduction to Users and Groups While users can be either people (meaning accounts tied to physical users) or accounts that exist for specific applications to use, groups are logical expressions of organization, tying users together for a common purpose. Users within a group share the same permissions to read, write, or execute files owned by that group. Each user is associated with a unique numerical identification number called a user ID ( UID ). Likewise, each group is associated with a group ID ( GID ). A user who creates a file is also the owner and group owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the group, and everyone else. The file owner can be changed only by root , and access permissions can be changed by both the root user and file owner. Additionally, Red Hat Enterprise Linux supports access control lists ( ACLs ) for files and directories which allow permissions for specific users outside of the owner to be set. For more information about this feature, see Chapter 5, Access Control Lists . Reserved User and Group IDs Red Hat Enterprise Linux reserves user and group IDs below 1000 for system users and groups. By default, the User Manager does not display the system users. Reserved user and group IDs are documented in the setup package. To view the documentation, use this command: The recommended practice is to assign IDs starting at 5,000 that were not already reserved, as the reserved range can increase in the future. To make the IDs assigned to new users by default start at 5,000, change the UID_MIN and GID_MIN directives in the /etc/login.defs file: Note For users created before you changed UID_MIN and GID_MIN directives, UIDs will still start at the default 1000. Even with new user and group IDs beginning with 5,000, it is recommended not to raise IDs reserved by the system above 1000 to avoid conflict with systems that retain the 1000 limit. 4.1.1. User Private Groups Red Hat Enterprise Linux uses a user private group ( UPG ) scheme, which makes UNIX groups easier to manage. A user private group is created whenever a new user is added to the system. It has the same name as the user for which it was created and that user is the only member of the user private group. User private groups make it safe to set default permissions for a newly created file or directory, allowing both the user and the group of that user to make modifications to the file or directory. The setting which determines what permissions are applied to a newly created file or directory is called a umask and is configured in the /etc/bashrc file. Traditionally on UNIX-based systems, the umask is set to 022 , which allows only the user who created the file or directory to make modifications. Under this scheme, all other users, including members of the creator's group , are not allowed to make any modifications. However, under the UPG scheme, this "group protection" is not necessary since every user has their own private group. See Section 4.3.5, "Setting Default Permissions for New Files Using umask " for more information. A list of all groups is stored in the /etc/group configuration file. 4.1.2. 
Shadow Passwords In environments with multiple users, it is very important to use shadow passwords provided by the shadow-utils package to enhance the security of system authentication files. For this reason, the installation program enables shadow passwords by default. The following is a list of the advantages shadow passwords have over the traditional way of storing passwords on UNIX-based systems: Shadow passwords improve system security by moving encrypted password hashes from the world-readable /etc/passwd file to /etc/shadow , which is readable only by the root user. Shadow passwords store information about password aging. Shadow passwords allow to enforce some of the security policies set in the /etc/login.defs file. Most utilities provided by the shadow-utils package work properly whether or not shadow passwords are enabled. However, since password aging information is stored exclusively in the /etc/shadow file, some utilities and commands do not work without first enabling shadow passwords: The chage utility for setting password aging parameters. For details, see the Password Security section in the Red Hat Enterprise Linux 7 Security Guide . The gpasswd utility for administrating the /etc/group file. The usermod command with the -e, --expiredate or -f, --inactive option. The useradd command with the -e, --expiredate or -f, --inactive option. 4.2. Managing Users in a Graphical Environment The Users utility allows you to view, modify, add, and delete local users in the graphical user interface. 4.2.1. Using the Users Settings Tool Press the Super key to enter the Activities Overview, type Users and then press Enter . The Users settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Space bar. Alternatively, you can open the Users utility from the Settings menu after clicking your user name in the top right corner of the screen. To make changes to the user accounts, first select the Unlock button and authenticate yourself as indicated by the dialog box that appears. Note that unless you have superuser privileges, the application will prompt you to authenticate as root . To add and remove users, select the + and - button respectively. To add a user to the administrative group wheel , change the Account Type from Standard to Administrator . To edit a user's language setting, select the language and a drop-down menu appears. Figure 4.1. The Users Settings Tool When a new user is created, the account is disabled until a password is set. The Password drop-down menu, shown in Figure 4.2, "The Password Menu" , contains the options to set a password by the administrator immediately, choose a password by the user at the first login, or create a guest account with no password required to log in. You can also disable or enable an account from this menu. Figure 4.2. The Password Menu 4.3. Using Command-Line Tools Apart from the Users settings tool described in Section 4.2, "Managing Users in a Graphical Environment" , which is designed for basic managing of users, you can use command line tools for managing users and groups that are listed in Table 4.1, "Command line utilities for managing users and groups" . Table 4.1. Command line utilities for managing users and groups Utilities Description id Displays user and group IDs. useradd , usermod , userdel Standard utilities for adding, modifying, and deleting user accounts. 
groupadd , groupmod , groupdel Standard utilities for adding, modifying, and deleting groups. gpasswd Utility primarily used for modification of group password in the /etc/gshadow file which is used by the newgrp command. pwck , grpck Utilities that can be used for verification of the password, group, and associated shadow files. pwconv , pwunconv Utilities that can be used for the conversion of passwords to shadow passwords, or back from shadow passwords to standard passwords. grpconv , grpunconv Similar to the , these utilities can be used for conversion of shadowed information for group accounts. 4.3.1. Adding a New User To add a new user to the system, type the following at a shell prompt as root : ...where options are command-line options as described in Table 4.2, "Common useradd command-line options" . By default, the useradd command creates a locked user account. To unlock the account, run the following command as root to assign a password: Optionally, you can set a password aging policy. See the Password Security section in the Red Hat Enterprise Linux 7 Security Guide . Table 4.2. Common useradd command-line options Option -c ' comment ' comment can be replaced with any string. This option is generally used to specify the full name of a user. -d home_directory Home directory to be used instead of default /home/ username / . -e date Date for the account to be disabled in the format YYYY-MM-DD. -f days Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires. -g group_name Group name or group number for the user's default (primary) group. The group must exist prior to being specified here. -G group_list List of additional (supplementary, other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here. -m Create the home directory if it does not exist. -M Do not create the home directory. -N Do not create a user private group for the user. -p password The password encrypted with crypt . -r Create a system account with a UID less than 1000 and without a home directory. -s User's login shell, which defaults to /bin/bash . -u uid User ID for the user, which must be unique and greater than 999. Important The default range of IDs for system and normal users has been changed in Red Hat Enterprise Linux 7 from earlier releases. Previously, UID 1-499 was used for system users and values above for normal users. The default range for system users is now 1-999. This change might cause problems when migrating to Red Hat Enterprise Linux 7 with existing users having UIDs and GIDs between 500 and 999. The default ranges of UID and GID can be changed in the /etc/login.defs file. Explaining the Process The following steps illustrate what happens if the command useradd juan is issued on a system that has shadow passwords enabled: A new line for juan is created in /etc/passwd : The line has the following characteristics: It begins with the user name juan . There is an x for the password field indicating that the system is using shadow passwords. A UID greater than 999 is created. Under Red Hat Enterprise Linux 7, UIDs below 1000 are reserved for system use and should not be assigned to users. A GID greater than 999 is created. Under Red Hat Enterprise Linux 7, GIDs below 1000 are reserved for system use and should not be assigned to users. 
The optional GECOS information is left blank. The GECOS field can be used to provide additional information about the user, such as their full name or phone number. The home directory for juan is set to /home/juan/ . The default shell is set to /bin/bash . A new line for juan is created in /etc/shadow : The line has the following characteristics: It begins with the user name juan . Two exclamation marks ( !! ) appear in the password field of the /etc/shadow file, which locks the account. Note If an encrypted password is passed using the -p flag, it is placed in the /etc/shadow file on the new line for the user. The password is set to never expire. A new line for a group named juan is created in /etc/group : A group with the same name as a user is called a user private group . For more information on user private groups, see Section 4.1.1, "User Private Groups" . The line created in /etc/group has the following characteristics: It begins with the group name juan . An x appears in the password field indicating that the system is using shadow group passwords. The GID matches the one listed for juan 's primary group in /etc/passwd . A new line for a group named juan is created in /etc/gshadow : The line has the following characteristics: It begins with the group name juan . An exclamation mark ( ! ) appears in the password field of the /etc/gshadow file, which locks the group. All other fields are blank. A directory for user juan is created in the /home directory: This directory is owned by user juan and group juan . It has read , write , and execute privileges only for the user juan . All other permissions are denied. The files within the /etc/skel/ directory (which contain default user settings) are copied into the new /home/juan/ directory: At this point, a locked account called juan exists on the system. To activate it, the administrator must assign a password to the account using the passwd command and, optionally, set password aging guidelines (see the Password Security section in the Red Hat Enterprise Linux 7 Security Guide for details). 4.3.2. Adding a New Group To add a new group to the system, type the following at a shell prompt as root : ...where options are command-line options as described in Table 4.3, "Common groupadd command-line options" . Table 4.3. Common groupadd command-line options Option Description -f , --force When used with -g gid and gid already exists, groupadd will choose another unique gid for the group. -g gid Group ID for the group, which must be unique and greater than 999. -K , --key key = value Override /etc/login.defs defaults. -o , --non-unique Allows creating groups with duplicate GID. -p , --password password Use this encrypted password for the new group. -r Create a system group with a GID less than 1000. 4.3.3. Adding an Existing User to an Existing Group Use the usermod utility to add an already existing user to an already existing group. Various options of usermod have different impact on user's primary group and on his or her supplementary groups. To override user's primary group, run the following command as root : To override user's supplementary groups, run the following command as root : Note that in this case all supplementary groups of the user are replaced by the new group or several new groups. To add one or more groups to user's supplementary groups, run one of the following commands as root : Note that in this case the new group is added to user's current supplementary groups. 4.3.4. 
Creating Group Directories System administrators usually like to create a group for each major project and assign people to the group when they need to access that project's files. With this traditional scheme, file management is difficult; when someone creates a file, it is associated with the primary group to which they belong. When a single person works on multiple projects, it becomes difficult to associate the right files with the right group. However, with the UPG scheme, groups are automatically assigned to files created within a directory with the setgid bit set. The setgid bit makes managing group projects that share a common directory very simple because any files a user creates within the directory are owned by the group that owns the directory. For example, a group of people need to work on files in the /opt/myproject/ directory. Some people are trusted to modify the contents of this directory, but not everyone. As root , create the /opt/myproject/ directory by typing the following at a shell prompt: Add the myproject group to the system: Associate the contents of the /opt/myproject/ directory with the myproject group: Allow users in the group to create files within the directory and set the setgid bit: At this point, all members of the myproject group can create and edit files in the /opt/myproject/ directory without the administrator having to change file permissions every time users write new files. To verify that the permissions have been set correctly, run the following command: Add users to the myproject group: 4.3.5. Setting Default Permissions for New Files Using umask When a process creates a file, the file has certain default permissions, for example, -rw-rw-r-- . These initial permissions are partially defined by the file mode creation mask , also called file permission mask or umask . Every process has its own umask, for example, bash has umask 0022 by default. Process umask can be changed. What umask consists of A umask consists of bits corresponding to standard file permissions. For example, for umask 0137 , the digits mean that: 0 = no meaning, it is always 0 (umask does not affect special bits) 1 = for owner permissions, the execute bit is set 3 = for group permissions, the execute and write bits are set 7 = for others permissions, the execute, write, and read bits are set Umasks can be represented in binary, octal, or symbolic notation. For example, the octal representation 0137 equals symbolic representation u=rw-,g=r--,o=--- . Symbolic notation specification is the reverse of the octal notation specification: it shows the allowed permissions, not the prohibited permissions. How umask works Umask prohibits permissions from being set for a file: When a bit is set in umask , it is unset in the file. When a bit is not set in umask , it can be set in the file, depending on other factors. The following figure shows how umask 0137 affects creating a new file. Figure 4.3. Applying umask when creating a file Important For security reasons, a regular file cannot have execute permissions by default. Therefore, even if umask is 0000 , which does not prohibit any permissions, a new regular file still does not have execute permissions. However, directories can be created with execute permissions: 4.3.5.1. Managing umask in Shells For popular shells, such as bash , ksh , zsh and tcsh , umask is managed using the umask shell builtin . Processes started from shell inherit its umask. 
Displaying the current mask To show the current umask in octal notation: To show the current umask in symbolic notation: Setting mask in shell using umask To set umask for the current shell session using octal notation run: Substitute octal_mask with four or less digits from 0 to 7 . When three or less digits are provided, permissions are set as if the command contained leading zeros. For example, umask 7 translates to 0007 . Example 4.1. Setting umask Using Octal Notation To prohibit new files from having write and execute permissions for owner and group, and from having any permissions for others: Or simply: To set umask for the current shell session using symbolic notation: Example 4.2. Setting umask Using Symbolic Notation To set umask 0337 using symbolic notation: Working with the default shell umask Shells usually have a configuration file where their default umask is set. For bash , it is /etc/bashrc . To show the default bash umask: The output shows if umask is set, either using the umask command or the UMASK variable. In the following example, umask is set to 022 using the umask command: To change the default umask for bash , change the umask command call or the UMASK variable assignment in /etc/bashrc . This example changes the default umask to 0227 : Working with the default shell umask of a specific user By default, bash umask of a new user defaults to the one defined in /etc/bashrc . To change bash umask for a particular user, add a call to the umask command in USDHOME/.bashrc file of that user. For example, to change bash umask of user john to 0227 : Setting default permissions for newly created home directories To change permissions with which user home directories are created, change the UMASK variable in the /etc/login.defs file: 4.4. Additional Resources For more information on how to manage users and groups on Red Hat Enterprise Linux, see the resources listed below. Installed Documentation For information about various utilities for managing users and groups, see the following manual pages: useradd (8) - The manual page for the useradd command documents how to use it to create new users. userdel (8) - The manual page for the userdel command documents how to use it to delete users. usermod (8) - The manual page for the usermod command documents how to use it to modify users. groupadd (8) - The manual page for the groupadd command documents how to use it to create new groups. groupdel (8) - The manual page for the groupdel command documents how to use it to delete groups. groupmod (8) - The manual page for the groupmod command documents how to use it to modify group membership. gpasswd (1) - The manual page for the gpasswd command documents how to manage the /etc/group file. grpck (8) - The manual page for the grpck command documents how to use it to verify the integrity of the /etc/group file. pwck (8) - The manual page for the pwck command documents how to use it to verify the integrity of the /etc/passwd and /etc/shadow files. pwconv (8) - The manual page for the pwconv , pwunconv , grpconv , and grpunconv commands documents how to convert shadowed information for passwords and groups. id (1) - The manual page for the id command documents how to display user and group IDs. umask (2) - The manual page for the umask command documents how to work with the file mode creation mask. For information about related configuration files, see: group (5) - The manual page for the /etc/group file documents how to use this file to define system groups. 
passwd (5) - The manual page for the /etc/passwd file documents how to use this file to define user information. shadow (5) - The manual page for the /etc/shadow file documents how to use this file to set passwords and account expiration information for the system. Online Documentation Red Hat Enterprise Linux 7 Security Guide - The Security Guide for Red Hat Enterprise Linux 7 provides additional information how to ensure password security and secure the workstation by enabling password aging and user account locking. See Also Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
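As a short end-to-end sketch tying the utilities above together (the user name, group name, and comment are illustrative; the IDs follow the earlier recommendation to start at 5,000):

# Create a supplementary group and a user whose IDs start at 5000.
groupadd -g 5000 engineers
useradd -u 5000 -G engineers -c "Juana Lopez" -s /bin/bash juana
# The new account is locked; assign a password to activate it.
passwd juana
# Verify the UID, the user private group, and the supplementary group membership.
id juana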
[ "cat /usr/share/doc/setup*/uidgid", "[file contents truncated] UID_MIN 5000 [file contents truncated] GID_MIN 5000 [file contents truncated]", "useradd options username", "passwd username", "juan:x:1001:1001::/home/juan:/bin/bash", "juan:!!:14798:0:99999:7:::", "juan:x:1001:", "juan:!::", "~]# ls -ld /home/juan drwx------. 4 juan juan 4096 Mar 3 18:23 /home/juan", "~]# ls -la /home/juan total 28 drwx------. 4 juan juan 4096 Mar 3 18:23 . drwxr-xr-x. 5 root root 4096 Mar 3 18:23 .. -rw-r--r--. 1 juan juan 18 Jun 22 2010 .bash_logout -rw-r--r--. 1 juan juan 176 Jun 22 2010 .bash_profile -rw-r--r--. 1 juan juan 124 Jun 22 2010 .bashrc drwxr-xr-x. 4 juan juan 4096 Nov 23 15:09 .mozilla", "groupadd options group_name", "~]# usermod -g group_name user_name", "~]# usermod -G group_name1 , group_name2 ,... user_name", "~]# usermod -aG group_name1 , group_name2 ,... user_name", "~]# usermod --append -G group_name1 , group_name2 ,... user_name", "mkdir /opt/myproject", "groupadd myproject", "chown root:myproject /opt/myproject", "chmod 2775 /opt/myproject", "~]# ls -ld /opt/myproject drwxrwsr-x. 3 root myproject 4096 Mar 3 18:31 /opt/myproject", "usermod -aG myproject username", "[john@server tmp]USD umask 0000 [john@server tmp]USD touch file [john@server tmp]USD mkdir directory [john@server tmp]USD ls -lh . total 0 drwxrwxrwx. 2 john john 40 Nov 2 13:17 directory -rw-rw-rw-. 1 john john 0 Nov 2 13:17 file", "~]USD umask 0022", "~]USD umask -S u=rwx,g=rx,o=rx", "~]USD umask octal_mask", "~]USD umask 0337", "~]USD umask 337", "~]USD umask -S symbolic_mask", "~]USD umask -S u=r,g=r,o=", "~]USD grep -i -B 1 umask /etc/bashrc", "~]USD grep -i -B 1 umask /etc/bashrc # By default, we want umask to get set. This sets it for non-login shell. -- if [ USDUID -gt 199 ] && [ \"id -gn\" = \"id -un\" ]; then umask 002 else umask 022", "if [ USDUID -gt 199 ] && [ \"id -gn\" = \"id -un\" ]; then umask 002 else umask 227", "john@server ~]USD echo 'umask 227' >> /home/john/.bashrc", "The permission mask is initialized to this value. If not specified, the permission mask will be initialized to 022. UMASK 077" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-managing_users_and_groups
Chapter 12. Build configuration resources
Chapter 12. Build configuration resources Use the following procedure to configure build settings. 12.1. Build controller configuration parameters The build.config.openshift.io/cluster resource offers the following configuration parameters. Parameter Description Build Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . spec : Holds user-settable values for the build controller configuration. buildDefaults Controls the default information for builds. defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. You can override values by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the BuildConfig strategy. gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . Values that are not set here are inherited from DefaultProxy. env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . resources : Defines resource requirements to execute the build. ImageLabel name : Defines the name of the label. It must have non-zero length. buildOverrides Controls override settings for builds. imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. nodeSelector : A selector which must be true for the build pod to fit on a node. tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. BuildList items : Standard object's metadata. 12.2. Configuring build settings You can configure build settings by editing the build.config.openshift.io/cluster resource. Procedure Edit the build.config.openshift.io/cluster resource: USD oc edit build.config.openshift.io/cluster The following is an example build.config.openshift.io/cluster resource: apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 2 name: cluster resourceVersion: "107233" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists 1 Build : Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . 2 buildDefaults : Controls the default information for builds. 3 defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 4 env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. 
5 gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . 6 imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . 7 resources : Defines resource requirements to execute the build. 8 buildOverrides : Controls override settings for builds. 9 imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. 10 nodeSelector : A selector which must be true for the build pod to fit on a node. 11 tolerations : A list of tolerations that overrides any existing tolerations set on a build pod.
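The following BuildConfig fragment is a minimal sketch of the per-build proxy override described above; the my-app-build name and the proxy hosts are placeholders rather than values from this chapter, and the same env list can be set under dockerStrategy instead of sourceStrategy.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-build
spec:
  strategy:
    sourceStrategy:
      env:
      # These values override the cluster-wide defaultProxy for this build only
      - name: HTTP_PROXY
        value: http://buildproxy.example.com:3128
      - name: HTTPS_PROXY
        value: http://buildproxy.example.com:3128
      - name: NO_PROXY
        value: internal.example.com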
[ "oc edit build.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/build-configuration
Chapter 8. Granting and Restricting Access to SSSD Containers Using HBAC Rules
Chapter 8. Granting and Restricting Access to SSSD Containers Using HBAC Rules For the Identity Management domain, each SSSD container represents itself as a different host, and administrators can set up host-based access control (HBAC) rules to allow or restrict access to individual containers separately. For details about configuring HBAC rules in Identity Management, see Configuring Host-Based Access Control in the Linux Domain Identity, Authentication, and Policy Guide .
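The exact rules depend on your environment, but as a rough, hypothetical sketch (the rule, group, and container host names below are invented for illustration), an administrator could grant one group SSH access to a single SSSD container from the IdM command line:

# Allow members of the 'developers' group to use sshd on one container host
ipa hbacrule-add allow_dev_container
ipa hbacrule-add-user allow_dev_container --groups=developers
ipa hbacrule-add-host allow_dev_container --hosts=sssd-container-1.example.com
ipa hbacrule-add-service allow_dev_container --hbacsvcs=sshd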
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/using_containerized_identity_management_services/sssd-with-different-configs-granting-and-restricting-access-to-sssd-containers-using-hbac-rules
Chapter 27. Configuring CPU Affinity and NUMA policies using systemd
Chapter 27. Configuring CPU Affinity and NUMA policies using systemd The CPU management, memory management, and I/O bandwidth options deal with partitioning available resources. 27.1. Configuring CPU affinity using systemd CPU affinity settings help you restrict the access of a particular process to some CPUs. Effectively, the CPU scheduler never schedules the process to run on the CPU that is not in the affinity mask of the process. The default CPU affinity mask applies to all services managed by systemd. To configure the CPU affinity mask for a particular systemd service, systemd provides CPUAffinity= both as a unit file option and a manager configuration option in the /etc/systemd/system.conf file. The CPUAffinity= unit file option sets a list of CPUs or CPU ranges that are merged and used as the affinity mask. The CPUAffinity option in the /etc/systemd/system.conf file defines an affinity mask for the process identification number (PID) 1 and all processes forked off of PID 1. You can then override the CPUAffinity on a per-service basis. Note After configuring the CPU affinity mask for a particular systemd service, you must restart the system to apply the changes. Procedure To set the CPU affinity mask for a particular systemd service using the CPUAffinity unit file option: Check the values of the CPUAffinity unit file option in the service of your choice: As the root user, set the required value of the CPUAffinity unit file option for the CPU ranges used as the affinity mask: Restart the service to apply the changes. To set the CPU affinity mask for a particular systemd service using the manager configuration option: Edit the /etc/systemd/system.conf file: Search for the CPUAffinity= option and set the CPU numbers. Save the edited file and restart the server to apply the changes. 27.2. Configuring NUMA policies using systemd Non-uniform memory access (NUMA) is a computer memory subsystem design, in which the memory access time depends on the physical memory location relative to the processor. Memory close to the CPU has lower latency (local memory) than memory that is local for a different CPU (foreign memory) or is shared between a set of CPUs. In terms of the Linux kernel, NUMA policy governs where (for example, on which NUMA nodes) the kernel allocates physical memory pages for the process. systemd provides unit file options NUMAPolicy and NUMAMask to control memory allocation policies for services. Procedure To set the NUMA memory policy through the NUMAPolicy unit file option: Check the values of the NUMAPolicy unit file option in the service of your choice: As the root user, set the required policy type of the NUMAPolicy unit file option: Restart the service to apply the changes. To set a global NUMAPolicy setting using the [Manager] configuration option: Search in the /etc/systemd/system.conf file for the NUMAPolicy option in the [Manager] section of the file. Edit the policy type and save the file. Reload the systemd configuration: Reboot the server. Important When you configure a strict NUMA policy, for example bind , make sure that you also appropriately set the CPUAffinity= unit file option. Additional resources NUMA policy configuration options for systemd The systemd.resource-control(5) , systemd.exec(5) , and set_mempolicy(2) man pages. 27.3. NUMA policy configuration options for systemd Systemd provides the following options to configure the NUMA policy: NUMAPolicy Controls the NUMA memory policy of the executed processes.
You can use these policy types: default preferred bind interleave local NUMAMask Controls the NUMA node list that is associated with the selected NUMA policy. Note that you do not have to specify the NUMAMask option for the following policies: default local For the preferred policy, the list specifies only a single NUMA node. Additional resources systemd.resource-control(5) , systemd.exec(5) , and set_mempolicy(2) man pages on your system NUMA policy configuration options for systemd
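As an alternative to setting each property with systemctl set-property, the same settings can be collected in a unit drop-in file. This is only a sketch under assumed values: foo.service, the CPU range, and the NUMA node number are placeholders.

# /etc/systemd/system/foo.service.d/override.conf
[Service]
# Restrict the service to CPUs 0-3
CPUAffinity=0-3
# Bind memory allocations to NUMA node 0
NUMAPolicy=bind
NUMAMask=0

After saving the drop-in, reload systemd and restart the service, for example with systemctl daemon-reload followed by systemctl restart foo.service.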
[ "systemctl show --property <CPU affinity configuration option> <service name>", "systemctl set-property <service name> CPUAffinity=<value>", "systemctl restart <service name>", "vi /etc/systemd/system.conf", "systemctl show --property <NUMA policy configuration option> <service name>", "systemctl set-property <service name> NUMAPolicy= <value>", "systemctl restart <service name>", "systemd daemon-reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/assembly_configuring-cpu-affinity-and-numa-policies-using-systemd_managing-monitoring-and-updating-the-kernel
6.15. Propagating the Configuration File to the Cluster Nodes
6.15. Propagating the Configuration File to the Cluster Nodes After you have created or edited a cluster configuration file on one of the nodes in the cluster, you need to propagate that same file to all of the cluster nodes and activate the configuration. Use the following command to propagate and activate a cluster configuration file. When you use the --activate option, you must also specify the --sync option for the activation to take effect. To verify that all of the nodes specified in the host's cluster configuration file have the identical cluster configuration file, execute the following command: If you have created or edited a configuration file on a local node, use the following command to send that file to one of the nodes in the cluster: To verify that all of the nodes specified in the local file have the identical cluster configuration file, execute the following command:
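For example, assuming a hypothetical cluster node named node01.example.com, the propagation and verification steps might look like the following; the host name and file path are placeholders.

# Propagate and activate the configuration stored on node01
ccs -h node01.example.com --sync --activate
# Verify that all nodes listed in that configuration have the same file
ccs -h node01.example.com --checkconf
# Send a locally edited file to node01, then verify it against the local copy
ccs -f /root/cluster.conf -h node01.example.com --setconf
ccs -f /root/cluster.conf --checkconf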
[ "ccs -h host --sync --activate", "ccs -h host --checkconf", "ccs -f file -h host --setconf", "ccs -f file --checkconf" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clusterconf-propagate-ccs-ca
Red Hat Developer Hub support
Red Hat Developer Hub support If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal . You can use the Red Hat Customer Portal for the following purposes: To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/getting_started_with_red_hat_developer_hub/snip-customer-support-info_rhdh-getting-started
Chapter 18. Managing TLS certificates
Chapter 18. Managing TLS certificates Streams for Apache Kafka supports TLS for encrypted communication between Kafka and Streams for Apache Kafka components. Streams for Apache Kafka establishes encrypted TLS connections for communication between the following components when using Kafka in KRaft mode: Kafka brokers Kafka controllers Kafka brokers and controllers Streams for Apache Kafka operators and Kafka Cruise Control and Kafka brokers Kafka Exporter and Kafka brokers Connections between clients and Kafka brokers use listeners that you must configure to use TLS-encrypted communication. You configure these listeners in the Kafka custom resource and each listener name and port number must be unique within the cluster. Communication between Kafka brokers and Kafka clients is encrypted according to how the tls property is configured for the listener. For more information, see Chapter 15, Setting up client access to a Kafka cluster . The following diagram shows the connections for secure communication. Figure 18.1. KRaft-based Kafka communication secured by TLS encryption The ports shown in the diagram are used as follows: Control plane listener (9090) The internal control plane listener on port 9090 facilitates interbroker communication between Kafka controllers and broker-to-controller communication. Additionally, the Cluster Operator communicates with the controllers through the listener. This listener is not accessible to Kafka clients. Replication listener (9091) Data replication between brokers, as well as internal connections to the brokers from Streams for Apache Kafka operators, Cruise Control, and the Kafka Exporter, use the replication listener on port 9091. This listener is not accessible to Kafka clients. Listeners for client connections (9092 or higher) For TLS-encrypted communication (through configuration of the listener), internal and external clients connect to Kafka brokers. External clients (producers and consumers) connect to the Kafka brokers through the advertised listener port. Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). If you are using ZooKeeper for cluster management, there are TLS connections between ZooKeeper and Kafka brokers and Streams for Apache Kafka operators. The following diagram shows the connections for secure communication when using ZooKeeper. Figure 18.2. Kafka and ZooKeeper communication secured by TLS encryption The ZooKeeper ports are used as follows: ZooKeeper port (2181) ZooKeeper port for connection to Kafka brokers. Additionally, the Cluster Operator communicates with ZooKeeper through this port. ZooKeeper internodal communication port (2888) ZooKeeper port for internodal communication between ZooKeeper nodes. ZooKeeper leader election port (3888) ZooKeeper port for leader election among ZooKeeper nodes in a ZooKeeper cluster. Node status monitoring using the KafkaAgent (8443) Streams for Apache Kafka includes a component called KafkaAgent that runs inside each Kafka node. The agent is responsible for collecting and providing node-specific information, such as current state and readiness, to the Cluster Operator. 
It listens on port 8443 for secure HTTPS connections and exposes this information through a REST API, which the Cluster Operator uses to retrieve data from the nodes. 18.1. Internal cluster CA and clients CA To support encryption, each Streams for Apache Kafka component needs its own private keys and public key certificates. All component certificates are signed by an internal CA (certificate authority) called the cluster CA . CA (Certificate Authority) certificates are generated by the Cluster Operator to verify the identities of components and clients. Similarly, each Kafka client application connecting to Streams for Apache Kafka using mTLS needs to use private keys and certificates. A second internal CA, named the clients CA , is used to sign certificates for the Kafka clients. Both the cluster CA and clients CA have a self-signed public key certificate. Kafka brokers are configured to trust certificates signed by either the cluster CA or clients CA. Components that clients do not need to connect to, such as ZooKeeper, only trust certificates signed by the cluster CA. Unless TLS encryption for external listeners is disabled, client applications must trust certificates signed by the cluster CA. This is also true for client applications that perform mTLS authentication. By default, Streams for Apache Kafka automatically generates and renews CA certificates issued by the cluster CA or clients CA. You can configure the management of these CA certificates using Kafka.spec.clusterCa and Kafka.spec.clientsCa properties. Note If you don't want to use the CAs generated by the Cluster Operator, you can install your own cluster and clients CA certificates . Any certificates you provide are not renewed by the Cluster Operator. 18.2. Secrets generated by the operators The Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within a cluster. It also sets up other TLS certificates if you want to enable encryption or mTLS authentication between Kafka brokers and clients. Secrets are created when custom resources are deployed, such as Kafka and KafkaUser . Streams for Apache Kafka uses these secrets to store private and public key certificates for Kafka clusters, clients, and users. The secrets are used for establishing TLS encrypted connections between Kafka brokers, and between brokers and clients. They are also used for mTLS authentication. Cluster and clients secrets are always pairs: one contains the public key and one contains the private key. Cluster secret A cluster secret contains the cluster CA to sign Kafka broker certificates. Connecting clients use the certificate to establish a TLS encrypted connection with a Kafka cluster. The certificate verifies broker identity. Client secret A client secret contains the clients CA for a user to sign its own client certificate. This allows mutual authentication against the Kafka cluster. The broker validates a client's identity through the certificate. User secret A user secret contains a private key and certificate. The secret is created and signed by the clients CA when a new user is created. The key and certificate are used to authenticate and authorize the user when accessing the cluster. Note You can provide Kafka listener certificates for TLS listeners or external listeners that have TLS encryption enabled. Use Kafka listener certificates to incorporate the security infrastructure you already have in place. 18.2.1. 
TLS authentication using keys and certificates in PEM or PKCS #12 format The secrets created by Streams for Apache Kafka provide private keys and certificates in PEM (Privacy Enhanced Mail) and PKCS #12 (Public-Key Cryptography Standards) formats. PEM and PKCS #12 are OpenSSL-generated key formats for TLS communications using the SSL protocol. You can configure mutual TLS (mTLS) authentication that uses the credentials contained in the secrets generated for a Kafka cluster and user. To set up mTLS, you must first do the following: Configure your Kafka cluster with a listener that uses mTLS Create a KafkaUser that provides client credentials for mTLS When you deploy a Kafka cluster, a <cluster_name>-cluster-ca-cert secret is created with a public key to verify the cluster. You use the public key to configure a truststore for the client. When you create a KafkaUser , a <kafka_user_name> secret is created with the keys and certificates to verify the user (client). Use these credentials to configure a keystore for the client. With the Kafka cluster and client set up to use mTLS, you extract credentials from the secrets and add them to your client configuration. PEM keys and certificates For PEM, you add the following to your client configuration: Truststore ca.crt from the <cluster_name>-cluster-ca-cert secret, which is the CA certificate for the cluster. Keystore user.crt from the <kafka_user_name> secret, which is the public certificate of the user. user.key from the <kafka_user_name> secret, which is the private key of the user. PKCS #12 keys and certificates For PKCS #12, you add the following to your client configuration: Truststore ca.p12 from the <cluster_name>-cluster-ca-cert secret, which is the CA certificate for the cluster. ca.password from the <cluster_name>-cluster-ca-cert secret, which is the password to access the public cluster CA certificate. Keystore user.p12 from the <kafka_user_name> secret, which is the public key certificate of the user. user.password from the <kafka_user_name> secret, which is the password to access the public key certificate of the Kafka user. PKCS #12 is supported by Java, so you can add the values of the certificates directly to your Java client configuration. You can also reference the certificates from a secure storage location. With PEM files, you must add the certificates directly to the client configuration in single-line format. Choose a format that's suitable for establishing TLS connections between your Kafka cluster and client. Use PKCS #12 if you are unfamiliar with PEM. Note All keys are 2048 bits in size and, by default, are valid for 365 days from the initial generation. You can change the validity period . 18.2.2. Secrets generated by the Cluster Operator The Cluster Operator generates the following certificates, which are saved as secrets in the OpenShift cluster. Streams for Apache Kafka uses these secrets by default. The cluster CA and clients CA have separate secrets for the private key and public key. <cluster_name> -cluster-ca Contains the private key of the cluster CA. Streams for Apache Kafka and Kafka components use the private key to sign server certificates. <cluster_name> -cluster-ca-cert Contains the public key of the cluster CA. Kafka clients use the public key to verify the identity of the Kafka brokers they are connecting to with TLS server authentication. <cluster_name> -clients-ca Contains the private key of the clients CA.
Kafka clients use the private key to sign new user certificates for mTLS authentication when connecting to Kafka brokers. <cluster_name> -clients-ca-cert Contains the public key of the clients CA. Kafka brokers use the public key to verify the identity of clients accessing the Kafka brokers when mTLS authentication is used. Secrets for communication between Streams for Apache Kafka components contain a private key and a public key certificate signed by the cluster CA. <cluster_name> -kafka-brokers Contains the private and public keys for Kafka brokers. <cluster_name> -zookeeper-nodes Contains the private and public keys for ZooKeeper nodes. <cluster_name> -cluster-operator-certs Contains the private and public keys for encrypting communication between the Cluster Operator and Kafka or ZooKeeper. <cluster_name> -entity-topic-operator-certs Contains the private and public keys for encrypting communication between the Topic Operator and Kafka or ZooKeeper. <cluster_name> -entity-user-operator-certs Contains the private and public keys for encrypting communication between the User Operator and Kafka or ZooKeeper. <cluster_name> -cruise-control-certs Contains the private and public keys for encrypting communication between Cruise Control and Kafka or ZooKeeper. <cluster_name> -kafka-exporter-certs Contains the private and public keys for encrypting communication between Kafka Exporter and Kafka or ZooKeeper. Note You can provide your own server certificates and private keys to connect to Kafka brokers using Kafka listener certificates rather than certificates signed by the cluster CA. 18.2.3. Cluster CA secrets Cluster CA secrets are managed by the Cluster Operator in a Kafka cluster. Only the <cluster_name> -cluster-ca-cert secret is required by clients. All other cluster secrets are accessed by Streams for Apache Kafka components. You can enforce this using OpenShift role-based access controls, if necessary. Note The CA certificates in <cluster_name> -cluster-ca-cert must be trusted by Kafka client applications so that they validate the Kafka broker certificates when connecting to Kafka brokers over TLS. Table 18.1. Fields in the <cluster_name>-cluster-ca secret Field Description ca.key The current private key for the cluster CA. Table 18.2. Fields in the <cluster_name>-cluster-ca-cert secret Field Description ca.p12 PKCS #12 store for storing certificates and keys. ca.password Password for protecting the PKCS #12 store. ca.crt The current certificate for the cluster CA. Table 18.3. Fields in the <cluster_name>-kafka-brokers secret Field Description <cluster_name> -kafka- <num> .p12 PKCS #12 store for storing certificates and keys. <cluster_name> -kafka- <num> .password Password for protecting the PKCS #12 store. <cluster_name> -kafka- <num> .crt Certificate for a Kafka broker pod <num> . Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . <cluster_name> -kafka- <num> .key Private key for a Kafka broker pod <num> . Table 18.4. Fields in the <cluster_name>-zookeeper-nodes secret Field Description <cluster_name> -zookeeper- <num> .p12 PKCS #12 store for storing certificates and keys. <cluster_name> -zookeeper- <num> .password Password for protecting the PKCS #12 store. <cluster_name> -zookeeper- <num> .crt Certificate for ZooKeeper node <num> . Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . <cluster_name> -zookeeper- <num> .key Private key for ZooKeeper pod <num> . Table 18.5. 
Fields in the <cluster_name>-cluster-operator-certs secret Field Description cluster-operator.p12 PKCS #12 store for storing certificates and keys. cluster-operator.password Password for protecting the PKCS #12 store. cluster-operator.crt Certificate for mTLS communication between the Cluster Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . cluster-operator.key Private key for mTLS communication between the Cluster Operator and Kafka or ZooKeeper. Table 18.6. Fields in the <cluster_name>-entity-topic-operator-certs secret Field Description entity-operator.p12 PKCS #12 store for storing certificates and keys. entity-operator.password Password for protecting the PKCS #12 store. entity-operator.crt Certificate for mTLS communication between the Topic Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . entity-operator.key Private key for mTLS communication between the Topic Operator and Kafka or ZooKeeper. Table 18.7. Fields in the <cluster_name>-entity-user-operator-certs secret Field Description entity-operator.p12 PKCS #12 store for storing certificates and keys. entity-operator.password Password for protecting the PKCS #12 store. entity-operator.crt Certificate for mTLS communication between the User Operator and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . entity-operator.key Private key for mTLS communication between the User Operator and Kafka or ZooKeeper. Table 18.8. Fields in the <cluster_name>-cruise-control-certs secret Field Description cruise-control.p12 PKCS #12 store for storing certificates and keys. cruise-control.password Password for protecting the PKCS #12 store. cruise-control.crt Certificate for mTLS communication between Cruise Control and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . cruise-control.key Private key for mTLS communication between the Cruise Control and Kafka or ZooKeeper. Table 18.9. Fields in the <cluster_name>-kafka-exporter-certs secret Field Description kafka-exporter.p12 PKCS #12 store for storing certificates and keys. kafka-exporter.password Password for protecting the PKCS #12 store. kafka-exporter.crt Certificate for mTLS communication between Kafka Exporter and Kafka or ZooKeeper. Signed by a current or former cluster CA private key in <cluster_name> -cluster-ca . kafka-exporter.key Private key for mTLS communication between the Kafka Exporter and Kafka or ZooKeeper. 18.2.4. Clients CA secrets Clients CA secrets are managed by the Cluster Operator in a Kafka cluster. The certificates in <cluster_name> -clients-ca-cert are those which the Kafka brokers trust. The <cluster_name> -clients-ca secret is used to sign the certificates of client applications. This secret must be accessible to the Streams for Apache Kafka components and for administrative access if you are intending to issue application certificates without using the User Operator. You can enforce this using OpenShift role-based access controls, if necessary. Table 18.10. Fields in the <cluster_name>-clients-ca secret Field Description ca.key The current private key for the clients CA. Table 18.11. Fields in the <cluster_name>-clients-ca-cert secret Field Description ca.p12 PKCS #12 store for storing certificates and keys. ca.password Password for protecting the PKCS #12 store. ca.crt The current certificate for the clients CA. 18.2.5. 
User secrets generated by the User Operator User secrets are managed by the User Operator. When a user is created using the User Operator, a secret is generated using the name of the user. Table 18.12. Fields in the user_name secret Secret name Field within secret Description <user_name> user.p12 PKCS #12 store for storing certificates and keys. user.password Password for protecting the PKCS #12 store. user.crt Certificate for the user, signed by the clients CA user.key Private key for the user 18.2.6. Adding labels and annotations to cluster CA secrets By configuring the clusterCaCert template property in the Kafka custom resource, you can add custom labels and annotations to the Cluster CA secrets created by the Cluster Operator. Labels and annotations are useful for identifying objects and adding contextual information. You configure template properties in Streams for Apache Kafka custom resources. Example template customization to add labels and annotations to secrets apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... 18.2.7. Disabling ownerReference in the CA secrets By default, the cluster and clients CA secrets are created with an ownerReference property that is set to the Kafka custom resource. This means that, when the Kafka custom resource is deleted, the CA secrets are also deleted (garbage collected) by OpenShift. If you want to reuse the CA for a new cluster, you can disable the ownerReference by setting the generateSecretOwnerReference property for the cluster and clients CA secrets to false in the Kafka configuration. When the ownerReference is disabled, CA secrets are not deleted by OpenShift when the corresponding Kafka custom resource is deleted. Example Kafka configuration with disabled ownerReference for cluster and clients CAs apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false # ... Additional resources CertificateAuthority schema reference 18.3. Certificate renewal and validity periods Cluster CA and clients CA certificates are only valid for a limited time period, known as the validity period. This is usually defined as a number of days since the certificate was generated. For CA certificates automatically created by the Cluster Operator, configure the validity period for certificates in the kafka resource using the following properties: Kafka.spec.clusterCa.validityDays for Cluster CA certificates Kafka.spec.clientsCa.validityDays for Clients CA certificates The default validity period for both certificates is 365 days. Manually-installed CA certificates should have their own validity periods defined. When a CA certificate expires, components and clients that still trust that certificate do not accept connections from peers whose certificates were signed by the CA private key. The components and clients need to trust the new CA certificate instead. To allow the renewal of CA certificates without a loss of service, the Cluster Operator initiates certificate renewal before the old CA certificates expire. 
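As a quick check of how much validity remains before automatic renewal is due, you can inspect the current CA certificates directly; this sketch assumes a cluster named my-cluster and uses the same oc and openssl commands shown later in this chapter.

oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates
oc get secret my-cluster-clients-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates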
Configure the renewal period of the certificates created by the Cluster Operator in the kafka resource using the following properties: Kafka.spec.clusterCa.renewalDays for Cluster CA certificates Kafka.spec.clientsCa.renewalDays for Clients CA certificates The default renewal period for both certificates is 30 days. The renewal period is measured backwards, from the expiry date of the current certificate. Validity period against renewal period To schedule the renewal period at a convenient time, use maintenance time windows . To make a change to the validity and renewal periods after creating the Kafka cluster, configure and apply the Kafka custom resource, and manually renew the CA certificates . If you do not manually renew the certificates, the new periods will be used the next time the certificate is renewed automatically. Example Kafka configuration for certificate validity and renewal periods apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true # ... The behavior of the Cluster Operator during the renewal period depends on the settings for the generateCertificateAuthority certificate generation properties for the cluster CA and clients CA. true If the properties are set to true , a CA certificate is generated automatically by the Cluster Operator, and renewed automatically within the renewal period. false If the properties are set to false , a CA certificate is not generated by the Cluster Operator. Use this option if you are installing your own certificates . 18.3.1. Renewing automatically generated CA Certificates When it's time to renew CA certificates, the Cluster Operator follows these steps: Generates a new CA certificate, but retains the existing key. The new certificate replaces the old one with the name ca.crt within the corresponding Secret . Generates new client certificates (for ZooKeeper nodes, Kafka brokers, and the Entity Operator). This is not strictly necessary because the signing key has not changed, but it keeps the validity period of the client certificate in sync with the CA certificate. Restarts ZooKeeper nodes to trust the new CA certificate and use the new client certificates. Restarts Kafka brokers to trust the new CA certificate and use the new client certificates. Restarts the Topic Operator and User Operator to trust the new CA certificate and use the new client certificates. User certificates are signed by the clients CA. The User Operator handles renewing user certificates when the clients CA is renewed. 18.3.2. Renewing client certificates The Cluster Operator is not aware of the client applications using the Kafka cluster. You must ensure clients continue to work after certificate renewal. The renewal process depends on how the clients are configured. When connecting to the cluster, and to ensure they operate correctly, client applications must include the following configuration: Truststore credentials from the <cluster_name>-cluster-ca-cert secret to verify the identity of the Kafka cluster. Keystore credentials from the <user_name> secret to verify the user when connecting to the Kafka cluster. The user secret provides credentials in PEM and PKCS #12 format, or it can provide a password when using SCRAM-SHA authentication. The User Operator creates the user credentials when a user is created.
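As a minimal sketch of what this means on the client side (file paths and passwords are placeholders, and property names may differ for non-Java clients), a Java client using the PKCS #12 credentials could be configured as follows:

# client.properties (hypothetical path names)
security.protocol=SSL
ssl.truststore.location=/opt/kafka/certs/ca.p12
ssl.truststore.password=<truststore_password>
ssl.truststore.type=PKCS12
ssl.keystore.location=/opt/kafka/certs/user.p12
ssl.keystore.password=<keystore_password>
ssl.keystore.type=PKCS12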
For an example of adding certificates to client configuration, see Section 16.4.2, "Securing user access to Kafka" . If you are provisioning client certificates and keys manually, you must generate new client certificates and ensure the new certificates are used by clients within the renewal period. Failure to do this by the end of the renewal period could result in client applications being unable to connect to the cluster. Note For workloads running inside the same OpenShift cluster and namespace, secrets can be mounted as a volume so the client pods construct their keystores and truststores from the current state of the secrets. For more details on this procedure, see Configuring internal clients to trust the cluster CA . 18.3.3. Scheduling maintenance time windows Schedule certificate renewal updates by the Cluster Operator to Kafka or ZooKeeper clusters for minimal impact on client applications. Use time windows in conjunction with the renewal periods of the CA certificates created by the Cluster Operator ( Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays ). Updates are usually triggered by changes to the Kafka resource by the user or through user tooling. Rolling restarts for certificate expiration may occur without Kafka resource changes. While unscheduled restarts shouldn't affect service availability, they could impact the performance of client applications. Maintenance time windows allow scheduling of these updates for convenient times. Configure maintenance time windows as follows: Configure an array of strings using the Kafka.spec.maintenanceTimeWindows property of the Kafka resource. Each string is a cron expression interpreted as being in UTC (Coordinated Universal Time). The following example configures a single maintenance time window that starts at midnight and ends at 01:59am (UTC), on Sundays, Mondays, Tuesdays, Wednesdays, and Thursdays. Example maintenance time window configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... maintenanceTimeWindows: - "* * 0-1 ? * SUN,MON,TUE,WED,THU *" #... Note The Cluster Operator doesn't adhere strictly to the given time windows for maintenance operations. Maintenance operations are triggered by the first reconciliation that occurs within the specified time window. If the time window is shorter than the interval between reconciliations, there's a risk that the reconciliation may happen outside of the time window. Therefore, maintenance time windows must be at least as long as the interval between reconciliations. 18.3.4. Manually renewing Cluster Operator-managed CA certificates Cluster and clients CA certificates generated by the Cluster Operator auto-renew at the start of their respective certificate renewal periods. However, you can use the strimzi.io/force-renew annotation to manually renew one or both of these certificates before the certificate renewal period starts. You might do this for security reasons, or if you have changed the renewal or validity periods for the certificates . A renewed certificate uses the same private key as the old certificate. Note If you are using your own CA certificates, the force-renew annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates . Prerequisites The Cluster Operator must be deployed. A Kafka cluster in which CA certificates and private keys are installed. The OpenSSL TLS management tool to check the period of validity for CA certificates.
In this procedure, we use a Kafka cluster named my-cluster within the my-project namespace. Procedure Apply the strimzi.io/force-renew annotation to the secret that contains the CA certificate that you want to renew. Renewing the Cluster CA secret oc annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew="true" Renewing the Clients CA secret oc annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew="true" At the reconciliation, the Cluster Operator generates new certificates. If maintenance time windows are configured, the Cluster Operator generates the new CA certificate at the first reconciliation within the maintenance time window. Check the period of validity for the new CA certificates. Checking the period of validity for the new cluster CA certificate oc get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates Checking the period of validity for the new clients CA certificate oc get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates The command returns a notBefore and notAfter date, which is the valid start and end date for the CA certificate. Update client configurations to trust the new cluster CA certificate. See: Section 18.4, "Configuring internal clients to trust the cluster CA" Section 18.5, "Configuring external clients to trust the cluster CA" 18.3.5. Manually recovering from expired Cluster Operator-managed CA certificates The Cluster Operator automatically renews the cluster and clients CA certificates when their renewal periods begin. Nevertheless, unexpected operational problems or disruptions may prevent the renewal process, such as prolonged downtime of the Cluster Operator or unavailability of the Kafka cluster. If CA certificates expire, Kafka cluster components cannot communicate with each other and the Cluster Operator cannot renew the CA certificates without manual intervention. To promptly perform a recovery, follow the steps outlined in this procedure in the order given. You can recover from expired cluster and clients CA certificates. The process involves deleting the secrets containing the expired certificates so that new ones are generated by the Cluster Operator. For more information on the secrets managed in Streams for Apache Kafka, see Section 18.2.2, "Secrets generated by the Cluster Operator" . Note If you are using your own CA certificates and they expire, the process is similar, but you need to renew the CA certificates rather than use certificates generated by the Cluster Operator. Prerequisites The Cluster Operator must be deployed. A Kafka cluster in which CA certificates and private keys are installed. The OpenSSL TLS management tool to check the period of validity for CA certificates. In this procedure, we use a Kafka cluster named my-cluster within the my-project namespace. Procedure Delete the secret containing the expired CA certificate. Deleting the Cluster CA secret oc delete secret my-cluster-cluster-ca-cert -n my-project Deleting the Clients CA secret oc delete secret my-cluster-clients-ca-cert -n my-project Wait for the Cluster Operator to generate new certificates. A new CA cluster certificate to verify the identity of the Kafka brokers is created in a secret of the same name ( my-cluster-cluster-ca-cert ). A new CA clients certificate to verify the identity of Kafka users is created in a secret of the same name ( my-cluster-clients-ca-cert ). 
Check the period of validity for the new CA certificates. Checking the period of validity for the new cluster CA certificate oc get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates Checking the period of validity for the new clients CA certificate oc get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -dates The command returns a notBefore and notAfter date, which is the valid start and end date for the CA certificate. Delete the component pods and secrets that use the CA certificates. Delete the ZooKeeper secret. Wait for the Cluster Operator to detect the missing ZooKeeper secret and recreate it. Delete all ZooKeeper pods. Delete the Kafka secret. Wait for the Cluster Operator to detect the missing Kafka secret and recreate it. Delete all Kafka pods. If you are only recovering the clients CA certificate, you only need to delete the Kafka secret and pods. You can use the following oc command to find resources and also verify that they have been removed. oc get <resource_type> --all-namespaces | grep <kafka_cluster_name> Replace <resource_type> with the type of the resource, such as Pod or Secret . Wait for the Cluster Operator to detect the missing Kafka and ZooKeeper pods and recreate them with the updated CA certificates. On reconciliation, the Cluster Operator automatically updates other components to trust the new CA certificates. Verify that there are no issues related to certificate validation in the Cluster Operator log. Update client configurations to trust the new cluster CA certificate. See: Section 18.4, "Configuring internal clients to trust the cluster CA" Section 18.5, "Configuring external clients to trust the cluster CA" 18.3.6. Replacing private keys used by Cluster Operator-managed CA certificates You can replace the private keys used by the cluster CA and clients CA certificates generated by the Cluster Operator. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key. Note If you are using your own CA certificates, the force-replace annotation cannot be used. Instead, follow the procedure for renewing your own CA certificates . Prerequisites The Cluster Operator is running. A Kafka cluster in which CA certificates and private keys are installed. Procedure Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to renew. Table 18.13. Commands for replacing private keys Private key for Secret Annotate command Cluster CA <cluster_name>-cluster-ca oc annotate secret <cluster_name>-cluster-ca strimzi.io/force-replace="true" Clients CA <cluster_name>-clients-ca oc annotate secret <cluster_name>-clients-ca strimzi.io/force-replace="true" At the reconciliation the Cluster Operator will: Generate a new private key for the Secret that you annotated Generate a new CA certificate If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the maintenance time window. Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator. Additional resources Section 18.2, "Secrets generated by the operators" Section 18.3.3, "Scheduling maintenance time windows" 18.4. 
Configuring internal clients to trust the cluster CA This procedure describes how to configure a Kafka client that resides inside the OpenShift cluster - connecting to a TLS listener - to trust the cluster CA certificate. The easiest way to achieve this for an internal client is to use a volume mount to access the Secrets containing the necessary certificates and keys. Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs. Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 ( .p12 ) or PEM ( .crt ). The steps describe how to mount the Cluster Secret that verifies the identity of the Kafka cluster to the client pod. Prerequisites The Cluster Operator must be running. There needs to be a Kafka resource within the OpenShift cluster. You need a Kafka client application inside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate. The client application must be running in the same namespace as the Kafka resource. Using PKCS #12 format (.p12) Mount the cluster Secret as a volume when defining the client pod. For example: kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert Here we're mounting the following: The PKCS #12 file into an exact path, which can be configured The password into an environment variable, where it can be used for Java configuration Configure the Kafka client with the following properties: A security protocol option: security.protocol: SSL when using TLS for encryption (with or without mTLS authentication). security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. ssl.truststore.location with the truststore location where the certificates were imported. ssl.truststore.password with the password for accessing the truststore. ssl.truststore.type=PKCS12 to identify the truststore type. Using PEM format (.crt) Mount the cluster Secret as a volume when defining the client pod. For example: kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert Use the extracted certificate to configure a TLS connection in clients that use certificates in X.509 format. 18.5. Configuring external clients to trust the cluster CA This procedure describes how to configure a Kafka client that resides outside the OpenShift cluster - connecting to an external listener - to trust the cluster CA certificate. Follow this procedure when setting up the client and during the renewal period, when the old clients CA certificate is replaced. Follow the steps to configure trust certificates that are signed by the cluster CA for Java-based Kafka Producer, Consumer, and Streams APIs. Choose the steps to follow according to the certificate format of the cluster CA: PKCS #12 ( .p12 ) or PEM ( .crt ). The steps describe how to obtain the certificate from the Cluster Secret that verifies the identity of the Kafka cluster. Important The <cluster_name> -cluster-ca-cert secret contains more than one CA certificate during the CA certificate renewal period. 
Clients must add all of them to their truststores. Prerequisites The Cluster Operator must be running. There needs to be a Kafka resource within the OpenShift cluster. You need a Kafka client application outside the OpenShift cluster that will connect using TLS, and needs to trust the cluster CA certificate. Using PKCS #12 format (.p12) Extract the cluster CA certificate and password from the <cluster_name> -cluster-ca-cert Secret of the Kafka cluster. oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12 oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password Replace <cluster_name> with the name of the Kafka cluster. Configure the Kafka client with the following properties: A security protocol option: security.protocol: SSL when using TLS. security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. ssl.truststore.location with the truststore location where the certificates were imported. ssl.truststore.password with the password for accessing the truststore. This property can be omitted if it is not needed by the truststore. ssl.truststore.type=PKCS12 to identify the truststore type. Using PEM format (.crt) Extract the cluster CA certificate from the <cluster_name> -cluster-ca-cert secret of the Kafka cluster. oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate to configure a TLS connection in clients that use certificates in X.509 format. 18.6. Using your own CA certificates and private keys Install and use your own CA certificates and private keys instead of using the defaults generated by the Cluster Operator. You can replace the cluster and clients CA certificates and private keys. You can switch to using your own CA certificates and private keys in the following ways: Install your own CA certificates and private keys before deploying your Kafka cluster Replace the default CA certificates and private keys with your own after deploying a Kafka cluster The steps to replace the default CA certificates and private keys after deploying a Kafka cluster are the same as those used to renew your own CA certificates and private keys. If you use your own certificates, they won't be renewed automatically. You need to renew the CA certificates and private keys before they expire. Renewal options: Renew the CA certificates only Renew CA certificates and private keys (or replace the defaults) 18.6.1. Installing your own CA certificates and private keys Install your own CA certificates and private keys instead of using the cluster and clients CA certificates and private keys generated by the Cluster Operator. By default, Streams for Apache Kafka uses the following cluster CA and clients CA secrets , which are renewed automatically. Cluster CA secrets <cluster_name>-cluster-ca <cluster_name>-cluster-ca-cert Clients CA secrets <cluster_name>-clients-ca <cluster_name>-clients-ca-cert To install your own certificates, use the same names. Prerequisites The Cluster Operator is running. A Kafka cluster is not yet deployed. If you have already deployed a Kafka cluster, you can replace the default CA certificates with your own . Your own X.509 certificates and keys in PEM format for the cluster CA or clients CA. If you want to use a cluster or clients CA which is not a Root CA, you have to include the whole chain in the certificate file. 
The chain should be in the following order: The cluster or clients CA One or more intermediate CAs The root CA All CAs in the chain should be configured using the X509v3 Basic Constraints extension. Basic Constraints limit the path length of a certificate chain. The OpenSSL TLS management tool for converting certificates. Before you begin The Cluster Operator generates keys and certificates in PEM (Privacy Enhanced Mail) and PKCS #12 (Public-Key Cryptography Standards) formats. You can add your own certificates in either format. Some applications cannot use PEM certificates and support only PKCS #12 certificates. If you don't have a cluster certificate in PKCS #12 format, use the OpenSSL TLS management tool to generate one from your ca.crt file. Example certificate generation command openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt Replace <P12_password> with your own password. Procedure Create a new secret that contains the CA certificate. Client secret creation with a certificate in PEM format only oc create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt Cluster secret creation with certificates in PEM and PKCS #12 format oc create secret generic <cluster_name>-cluster-ca-cert \ --from-file=ca.crt=ca.crt \ --from-file=ca.p12=ca.p12 \ --from-literal=ca.password= P12-PASSWORD Replace <cluster_name> with the name of your Kafka cluster. Create a new secret that contains the private key. oc create secret generic <ca_key_secret> --from-file=ca.key=ca.key Label the secrets. oc label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster="<cluster_name>" oc label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster="<cluster_name>" Label strimzi.io/kind=Kafka identifies the Kafka custom resource. Label strimzi.io/cluster="<cluster_name>" identifies the Kafka cluster. Annotate the secrets. oc annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation="<ca_certificate_generation>" oc annotate secret <ca_key_secret> strimzi.io/ca-key-generation="<ca_key_generation>" Annotation strimzi.io/ca-cert-generation="<ca_certificate_generation>" defines the generation of a new CA certificate. Annotation strimzi.io/ca-key-generation="<ca_key_generation>" defines the generation of a new CA key. Start from 0 (zero) as the incremental value ( strimzi.io/ca-cert-generation=0 ) for your own CA certificate. Set a higher incremental value when you renew the certificates. Create the Kafka resource for your cluster, configuring either the Kafka.spec.clusterCa or the Kafka.spec.clientsCa object to not use generated CAs. Example fragment Kafka resource configuring the cluster CA to use certificates you supply for yourself apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... clusterCa: generateCertificateAuthority: false Additional resources Section 18.6.2, "Renewing your own CA certificates" Section 18.6.3, "Renewing or replacing CA certificates and private keys with your own" Section 16.1.4, "Using custom listener certificates for TLS encryption" 18.6.2. Renewing your own CA certificates If you are using your own CA certificates, you need to renew them manually. The Cluster Operator will not renew them automatically. Renew the CA certificates in the renewal period before they expire. Perform the steps in this procedure when you are renewing CA certificates and continuing with the same private key.
If you are renewing your own CA certificates and private keys, see Section 18.6.3, "Renewing or replacing CA certificates and private keys with your own" . The procedure describes the renewal of CA certificates in PEM format. Prerequisites The Cluster Operator is running. You have new cluster or clients X.509 certificates in PEM format. Procedure Update the Secret for the CA certificate. Edit the existing secret to add the new CA certificate and update the certificate generation annotation value. oc edit secret <ca_certificate_secret_name> <ca_certificate_secret_name> is the name of the Secret , which is <kafka_cluster_name> -cluster-ca-cert for the cluster CA certificate and <kafka_cluster_name> -clients-ca-cert for the clients CA certificate. The following example shows a secret for a cluster CA certificate that's associated with a Kafka cluster named my-cluster . Example secret configuration for a cluster CA certificate apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: "0" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 Current base64-encoded CA certificate 2 Current CA certificate generation annotation value Encode your new CA certificate into base64. cat <path_to_new_certificate> | base64 Update the CA certificate. Copy the base64-encoded CA certificate from the step as the value for the ca.crt property under data . Increase the value of the CA certificate generation annotation. Update the strimzi.io/ca-cert-generation annotation with a higher incremental value. For example, change strimzi.io/ca-cert-generation=0 to strimzi.io/ca-cert-generation=1 . If the Secret is missing the annotation, the value is treated as 0 , so add the annotation with a value of 1 . When Streams for Apache Kafka generates certificates, the certificate generation annotation is automatically incremented by the Cluster Operator. For your own CA certificates, set the annotations with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates. The strimzi.io/ca-cert-generation has to be incremented on each CA certificate renewal. Save the secret with the new CA certificate and certificate generation annotation value. Example secret configuration updated with a new CA certificate apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: "1" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 New base64-encoded CA certificate 2 New CA certificate generation annotation value On the reconciliation, the Cluster Operator performs a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate. If maintenance time windows are configured, the Cluster Operator will roll the pods at the first reconciliation within the maintenance time window. 18.6.3. Renewing or replacing CA certificates and private keys with your own If you are using your own CA certificates and private keys, you need to renew them manually. The Cluster Operator will not renew them automatically. Renew the CA certificates in the renewal period before they expire. You can also use the same procedure to replace the CA certificates and private keys generated by the Streams for Apache Kafka operators with your own. 
Perform the steps in this procedure when you are renewing or replacing CA certificates and private keys. If you are only renewing your own CA certificates, see Section 18.6.2, "Renewing your own CA certificates" . The procedure describes the renewal of CA certificates and private keys in PEM format. Before going through the following steps, make sure that the CN (Common Name) of the new CA certificate is different from the current one. For example, when the Cluster Operator renews certificates automatically it adds a v<version_number> suffix to identify a version. Do the same with your own CA certificate by adding a different suffix on each renewal. By using a different key to generate a new CA certificate, you retain the current CA certificate stored in the Secret . Prerequisites The Cluster Operator is running. You have new cluster or clients X.509 certificates and keys in PEM format. Procedure Pause the reconciliation of the Kafka custom resource. Annotate the custom resource in OpenShift, setting the pause-reconciliation annotation to true : oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation="true" For example, for a Kafka custom resource named my-cluster : oc annotate Kafka my-cluster strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused : oc describe Kafka <name_of_custom_resource> The type condition changes to ReconciliationPaused at the lastTransitionTime . Check the settings for the generateCertificateAuthority properties in your Kafka custom resource. If a property is set to false , a CA certificate is not generated by the Cluster Operator. You require this setting if you are using your own certificates. If needed, edit the existing Kafka custom resource and set the generateCertificateAuthority properties to false . oc edit Kafka <name_of_custom_resource> The following example shows a Kafka custom resource with both cluster and clients CA certificates generation delegated to the user. Example Kafka configuration using your own CA certificates apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka # ... spec: # ... clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2 # ... 1 Use your own cluster CA 2 Use your own clients CA Update the Secret for the CA certificate. Edit the existing secret to add the new CA certificate and update the certificate generation annotation value. oc edit secret <ca_certificate_secret_name> <ca_certificate_secret_name> is the name of the Secret , which is <kafka_cluster_name>-cluster-ca-cert for the cluster CA certificate and <kafka_cluster_name>-clients-ca-cert for the clients CA certificate. The following example shows a secret for a cluster CA certificate that's associated with a Kafka cluster named my-cluster . Example secret configuration for a cluster CA certificate apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: "0" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 Current base64-encoded CA certificate 2 Current CA certificate generation annotation value Rename the current CA certificate to retain it. Rename the current ca.crt property under data as ca-<date>.crt , where <date> is the certificate expiry date in the format YEAR-MONTH-DAYTHOUR-MINUTE-SECONDZ . For example ca-2023-01-26T17-32-00Z.crt: . 
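To derive the date string for the renamed property from the current certificate itself, you can combine openssl with GNU date. This is a minimal sketch that assumes a Kafka cluster named my-cluster and GNU date, as found on RHEL; adjust the secret name if you are working with the clients CA.

# Extract the current CA certificate from the secret and read its expiry date
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > current-ca.crt
ENDDATE=$(openssl x509 -noout -enddate -in current-ca.crt | cut -d= -f2)

# Format the expiry date as the retained property name, for example ca-2023-01-26T17-32-00Z.crt
date -u -d "$ENDDATE" +"ca-%Y-%m-%dT%H-%M-%SZ.crt"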
Leave the value for the property as it is to retain the current CA certificate. Encode your new CA certificate into base64. cat <path_to_new_certificate> | base64 Update the CA certificate. Create a new ca.crt property under data and copy the base64-encoded CA certificate from the step as the value for ca.crt property. Increase the value of the CA certificate generation annotation. Update the strimzi.io/ca-cert-generation annotation with a higher incremental value. For example, change strimzi.io/ca-cert-generation=0 to strimzi.io/ca-cert-generation=1 . If the Secret is missing the annotation, the value is treated as 0 , so add the annotation with a value of 1 . When Streams for Apache Kafka generates certificates, the certificate generation annotation is automatically incremented by the Cluster Operator. For your own CA certificates, set the annotations with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates. The strimzi.io/ca-cert-generation has to be incremented on each CA certificate renewal. Save the secret with the new CA certificate and certificate generation annotation value. Example secret configuration updated with a new CA certificate apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 2 metadata: annotations: strimzi.io/ca-cert-generation: "1" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque 1 New base64-encoded CA certificate 2 Old base64-encoded CA certificate 3 New CA certificate generation annotation value Update the Secret for the CA key used to sign your new CA certificate. Edit the existing secret to add the new CA key and update the key generation annotation value. oc edit secret <ca_key_name> <ca_key_name> is the name of CA key, which is <kafka_cluster_name>-cluster-ca for the cluster CA key and <kafka_cluster_name>-clients-ca for the clients CA key. The following example shows a secret for a cluster CA key that's associated with a Kafka cluster named my-cluster . Example secret configuration for a cluster CA key apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: "0" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca #... type: Opaque 1 Current base64-encoded CA key 2 Current CA key generation annotation value Encode the CA key into base64. cat <path_to_new_key> | base64 Update the CA key. Copy the base64-encoded CA key from the step as the value for the ca.key property under data . Increase the value of the CA key generation annotation. Update the strimzi.io/ca-key-generation annotation with a higher incremental value. For example, change strimzi.io/ca-key-generation=0 to strimzi.io/ca-key-generation=1 . If the Secret is missing the annotation, it is treated as 0 , so add the annotation with a value of 1 . When Streams for Apache Kafka generates certificates, the key generation annotation is automatically incremented by the Cluster Operator. For your own CA certificates together with a new CA key, set the annotation with a higher incremental value. The annotation needs a higher value than the one from the current secret so that the Cluster Operator can roll the pods and update the certificates and keys. 
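If you prefer to update the generation annotations non-interactively rather than in the editor, oc annotate with --overwrite has the same effect. This sketch assumes a cluster named my-cluster and a new generation value of 1, as in the examples above; the certificate and key data themselves still have to be updated in the secrets.

# Bump the CA key generation on the cluster CA key secret
oc annotate secret my-cluster-cluster-ca --overwrite strimzi.io/ca-key-generation="1"

# Bump the CA certificate generation on the cluster CA certificate secret
oc annotate secret my-cluster-cluster-ca-cert --overwrite strimzi.io/ca-cert-generation="1"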
The strimzi.io/ca-key-generation has to be incremented on each CA certificate renewal. Save the secret with the new CA key and key generation annotation value. Example secret configuration updated with a new CA key apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 1 metadata: annotations: strimzi.io/ca-key-generation: "1" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca #... type: Opaque 1 New base64-encoded CA key 2 New CA key generation annotation value Resume from the pause. To resume the Kafka custom resource reconciliation, set the pause-reconciliation annotation to false . oc annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation="false" You can also do the same by removing the pause-reconciliation annotation. oc annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation- On the reconciliation, the Cluster Operator performs a rolling update of ZooKeeper, Kafka, and other components to trust the new CA certificate. When the rolling update is complete, the Cluster Operator will start a new one to generate new server certificates signed by the new CA key. If maintenance time windows are configured, the Cluster Operator will roll the pods at the first reconciliation within the maintenance time window. Wait until the rolling updates to move to the new CA certificate are complete. Remove any outdated certificates from the secret configuration to ensure that the cluster no longer trusts them. oc edit secret <ca_certificate_secret_name> Example secret configuration with the old certificate removed apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... metadata: annotations: strimzi.io/ca-cert-generation: "1" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert #... type: Opaque Start a manual rolling update of your cluster to pick up the changes made to the secret configuration. See Chapter 29, Managing rolling updates .
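The manual rolling update described in Chapter 29 is triggered with an annotation on the resource that manages the broker pods. The following command is a minimal sketch that assumes a cluster named my-cluster whose Kafka pods are managed by a StrimziPodSet named my-cluster-kafka; check which resource and name apply to your deployment (for example, node pools use different names) before running it.

# Request a manual rolling update of the Kafka pods
oc annotate strimzipodset my-cluster-kafka strimzi.io/manual-rolling-update="true"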
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: clusterCaCert: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateSecretOwnerReference: false clientsCa: generateSecretOwnerReference: false", "Not Before Not After | | |<--------------- validityDays --------------->| <--- renewalDays --->|", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true clientsCa: renewalDays: 30 validityDays: 365 generateCertificateAuthority: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # maintenanceTimeWindows: - \"* * 0-1 ? * SUN,MON,TUE,WED,THU *\" #", "annotate secret my-cluster-cluster-ca-cert -n my-project strimzi.io/force-renew=\"true\"", "annotate secret my-cluster-clients-ca-cert -n my-project strimzi.io/force-renew=\"true\"", "get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "delete secret my-cluster-cluster-ca-cert -n my-project", "delete secret my-cluster-clients-ca-cert -n my-project", "get secret my-cluster-cluster-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get secret my-cluster-clients-ca-cert -n my-project -o=jsonpath='{.data.ca\\.crt}' | base64 -d | openssl x509 -noout -dates", "get <resource_type> --all-namespaces | grep <kafka_cluster_name>", "kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/p12 env: - name: SECRET_PASSWORD valueFrom: secretKeyRef: name: my-secret key: my-password volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert", "kind: Pod apiVersion: v1 metadata: name: client-pod spec: containers: - name: client-name image: client-name volumeMounts: - name: secret-volume mountPath: /data/crt volumes: - name: secret-volume secret: secretName: my-cluster-cluster-ca-cert", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "openssl pkcs12 -export -in ca.crt -nokeys -out ca.p12 -password pass:<P12_password> -caname ca.crt", "create secret generic <cluster_name>-clients-ca-cert --from-file=ca.crt=ca.crt", "create secret generic <cluster_name>-cluster-ca-cert --from-file=ca.crt=ca.crt --from-file=ca.p12=ca.p12 --from-literal=ca.password= P12-PASSWORD", "create secret generic <ca_key_secret> --from-file=ca.key=ca.key", "label secret <ca_certificate_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"", "label secret <ca_key_secret> strimzi.io/kind=Kafka strimzi.io/cluster=\"<cluster_name>\"", "annotate secret <ca_certificate_secret> strimzi.io/ca-cert-generation=\"<ca_certificate_generation>\"", "annotate secret <ca_key_secret> strimzi.io/ca-key-generation=\"<ca_key_generation>\"", "kind: Kafka version: kafka.strimzi.io/v1beta2 spec: # clusterCa: generateCertificateAuthority: false", "edit secret <ca_certificate_secret_name>", 
"apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "cat <path_to_new_certificate> | base64", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"true\"", "annotate Kafka my-cluster strimzi.io/pause-reconciliation=\"true\"", "describe Kafka <name_of_custom_resource>", "edit Kafka <name_of_custom_resource>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: clusterCa: generateCertificateAuthority: false 1 clientsCa: generateCertificateAuthority: false 2", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 1 metadata: annotations: strimzi.io/ca-cert-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "cat <path_to_new_certificate> | base64", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F... 1 ca-2023-01-26T17-32-00Z.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0F... 2 metadata: annotations: strimzi.io/ca-cert-generation: \"1\" 3 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque", "edit secret <ca_key_name>", "apiVersion: v1 kind: Secret data: ca.key: SA1cKF1GFDzOIiPOIUQBHDNFGDFS... 1 metadata: annotations: strimzi.io/ca-key-generation: \"0\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque", "cat <path_to_new_key> | base64", "apiVersion: v1 kind: Secret data: ca.key: AB0cKF1GFDzOIiPOIUQWERZJQ0F... 1 metadata: annotations: strimzi.io/ca-key-generation: \"1\" 2 labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca # type: Opaque", "annotate --overwrite Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation=\"false\"", "annotate Kafka <name_of_custom_resource> strimzi.io/pause-reconciliation-", "edit secret <ca_certificate_secret_name>", "apiVersion: v1 kind: Secret data: ca.crt: GCa6LS3RTHeKFiFDGBOUDYFAZ0F metadata: annotations: strimzi.io/ca-cert-generation: \"1\" labels: strimzi.io/cluster: my-cluster strimzi.io/kind: Kafka name: my-cluster-cluster-ca-cert # type: Opaque" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/security-str
Chapter 141. KafkaMirrorMaker2MirrorSpec schema reference
Chapter 141. KafkaMirrorMaker2MirrorSpec schema reference Used in: KafkaMirrorMaker2Spec Property Property type Description sourceCluster string The alias of the source cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters . targetCluster string The alias of the target cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters . sourceConnector KafkaMirrorMaker2ConnectorSpec The specification of the Kafka MirrorMaker 2 source connector. heartbeatConnector KafkaMirrorMaker2ConnectorSpec The specification of the Kafka MirrorMaker 2 heartbeat connector. checkpointConnector KafkaMirrorMaker2ConnectorSpec The specification of the Kafka MirrorMaker 2 checkpoint connector. topicsPattern string A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. topicsBlacklistPattern string The topicsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.topicsExcludePattern . A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. topicsExcludePattern string A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. groupsPattern string A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. groupsBlacklistPattern string The groupsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.groupsExcludePattern . A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. groupsExcludePattern string A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported.
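The following command sketches how these properties fit together in a KafkaMirrorMaker2 resource. The cluster aliases, bootstrap addresses, connector settings, and patterns are illustrative assumptions, the deprecated blacklist properties are replaced by the exclude patterns, and a real deployment usually also needs version, replicas, and TLS or authentication settings for each cluster.

oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-source-cluster"   # must match an alias in spec.clusters
      targetCluster: "my-target-cluster"   # must match an alias in spec.clusters
      sourceConnector:
        config:
          replication.factor: 3
      checkpointConnector:
        config:
          checkpoints.topic.replication.factor: 3
      topicsPattern: "topic1|topic2"
      topicsExcludePattern: "topic3"
      groupsPattern: ".*"
EOF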
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMaker2MirrorSpec-reference
Architecture
Architecture OpenShift Container Platform 4.9 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "openshift-install create ignition-configs --dir USDHOME/testconfig", "cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },", "echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode", "This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service", "\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",", "USD oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m", "oc describe machineconfigs 01-worker-container-runtime | grep Path:", "Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown", "oc new-project my-webhook-namespace 1", "apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: 
auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server", "oc auth reconcile -f rbac.yaml", "apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert", "oc apply -f webhook-daemonset.yaml", "apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2", "oc apply -f webhook-secret.yaml", "apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2", "oc apply -f webhook-service.yaml", "apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: 
namespacereservation 6 kind: NamespaceReservation 7", "oc apply -f webhook-crd.yaml", "apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1", "oc apply -f webhook-api-service.yaml", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail", "oc apply -f webhook-config.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/architecture/index
Chapter 34. Understanding Message Formats
Chapter 34. Understanding Message Formats Abstract Before you can begin programming with Apache Camel, you should have a clear understanding of how messages and message exchanges are modelled. Because Apache Camel can process many message formats, the basic message type is designed to have an abstract format. Apache Camel provides the APIs needed to access and transform the data formats that underly message bodies and message headers. 34.1. Exchanges Overview An exchange object is a wrapper that encapsulates a received message and stores its associated metadata (including the exchange properties ). In addition, if the current message is dispatched to a producer endpoint, the exchange provides a temporary slot to hold the reply (the Out message). An important feature of exchanges in Apache Camel is that they support lazy creation of messages. This can provide a significant optimization in the case of routes that do not require explicit access to messages. Figure 34.1. Exchange Object Passing through a Route Figure 34.1, "Exchange Object Passing through a Route" shows an exchange object passing through a route. In the context of a route, an exchange object gets passed as the argument of the Processor.process() method. This means that the exchange object is directly accessible to the source endpoint, the target endpoint, and all of the processors in between. The Exchange interface The org.apache.camel.Exchange interface defines methods to access In and Out messages, as shown in Example 34.1, "Exchange Methods" . Example 34.1. Exchange Methods For a complete description of the methods in the Exchange interface, see Section 43.1, "The Exchange Interface" . Lazy creation of messages Apache Camel supports lazy creation of In , Out , and Fault messages. This means that message instances are not created until you try to access them (for example, by calling getIn() or getOut() ). The lazy message creation semantics are implemented by the org.apache.camel.impl.DefaultExchange class. If you call one of the no-argument accessors ( getIn() or getOut() ), or if you call an accessor with the boolean argument equal to true (that is, getIn(true) or getOut(true) ), the default method implementation creates a new message instance, if one does not already exist. If you call an accessor with the boolean argument equal to false (that is, getIn(false) or getOut(false) ), the default method implementation returns the current message value. [1] Lazy creation of exchange IDs Apache Camel supports lazy creation of exchange IDs. You can call getExchangeId() on any exchange to obtain a unique ID for that exchange instance, but the ID is generated only when you actually call the method. The DefaultExchange.getExchangeId() implementation of this method delegates ID generation to the UUID generator that is registered with the CamelContext . For details of how to register UUID generators with the CamelContext , see Section 34.4, "Built-In UUID Generators" . 34.2. Messages Overview Message objects represent messages using the following abstract model: Message body Message headers Message attachments The message body and the message headers can be of arbitrary type (they are declared as type Object ) and the message attachments are declared to be of type javax.activation.DataHandler , which can contain arbitrary MIME types. 
If you need to obtain a concrete representation of the message contents, you can convert the body and headers to another type using the type converter mechanism and, possibly, using the marshalling and unmarshalling mechanism. One important feature of Apache Camel messages is that they support lazy creation of message bodies and headers. In some cases, this means that a message can pass through a route without needing to be parsed at all. The Message interface The org.apache.camel.Message interface defines methods to access the message body, message headers and message attachments, as shown in Example 34.2, "Message Interface" . Example 34.2. Message Interface For a complete description of the methods in the Message interface, see Section 44.1, "The Message Interface" . Lazy creation of bodies, headers, and attachments Apache Camel supports lazy creation of bodies, headers, and attachments. This means that the objects that represent a message body, a message header, or a message attachment are not created until they are needed. For example, consider the following route that accesses the foo message header from the In message: In this route, if we assume that the component referenced by SourceURL supports lazy creation, the In message headers are not actually parsed until the header("foo") call is executed. At that point, the underlying message implementation parses the headers and populates the header map. The message body is not parsed until you reach the end of the route, at the to(" TargetURL ") call. At that point, the body is converted into the format required for writing it to the target endpoint, TargetURL . By waiting until the last possible moment before populating the bodies, headers, and attachments, you can ensure that unnecessary type conversions are avoided. In some cases, you can completely avoid parsing. For example, if a route contains no explicit references to message headers, a message could traverse the route without ever parsing the headers. Whether or not lazy creation is implemented in practice depends on the underlying component implementation. In general, lazy creation is valuable for those cases where creating a message body, a message header, or a message attachment is expensive. For details about implementing a message type that supports lazy creation, see Section 44.2, "Implementing the Message Interface" . Lazy creation of message IDs Apache Camel supports lazy creation of message IDs. That is, a message ID is generated only when you actually call the getMessageId() method. The DefaultExchange.getExchangeId() implementation of this method delegates ID generation to the UUID generator that is registered with the CamelContext . Some endpoint implementations would call the getMessageId() method implicitly, if the endpoint implements a protocol that requires a unique message ID. In particular, JMS messages normally include a header containing unique message ID, so the JMS component automatically calls getMessageId() to obtain the message ID (this is controlled by the messageIdEnabled option on the JMS endpoint). For details of how to register UUID generators with the CamelContext , see Section 34.4, "Built-In UUID Generators" . Initial message format The initial format of an In message is determined by the source endpoint, and the initial format of an Out message is determined by the target endpoint. If lazy creation is supported by the underlying component, the message remains unparsed until it is accessed explicitly by the application. 
Most Apache Camel components create the message body in a relatively raw form - for example, representing it using types such as byte[] , ByteBuffer , InputStream , or OutputStream . This ensures that the overhead required for creating the initial message is minimal. Where more elaborate message formats are required components usually rely on type converters or marshalling processors . Type converters It does not matter what the initial format of the message is, because you can easily convert a message from one format to another using the built-in type converters (see Section 34.3, "Built-In Type Converters" ). There are various methods in the Apache Camel API that expose type conversion functionality. For example, the convertBodyTo(Class type) method can be inserted into a route to convert the body of an In message, as follows: Where the body of the In message is converted to a java.lang.String . The following example shows how to append a string to the end of the In message body: Where the message body is converted to a string format before appending a string to the end. It is not necessary to convert the message body explicitly in this example. You can also use: Where the append() method automatically converts the message body to a string before appending its argument. Type conversion methods in Message The org.apache.camel.Message interface exposes some methods that perform type conversion explicitly: getBody(Class<T> type) - Returns the message body as type, T . getHeader(String name, Class<T> type) - Returns the named header value as type, T . For the complete list of supported conversion types, see Section 34.3, "Built-In Type Converters" . Converting to XML In addition to supporting conversion between simple types (such as byte[] , ByteBuffer , String , and so on), the built-in type converter also supports conversion to XML formats. For example, you can convert a message body to the org.w3c.dom.Document type. This conversion is more expensive than the simple conversions, because it involves parsing the entire message and then creating a tree of nodes to represent the XML document structure. You can convert to the following XML document types: org.w3c.dom.Document javax.xml.transform.sax.SAXSource XML type conversions have narrower applicability than the simpler conversions. Because not every message body conforms to an XML structure, you have to remember that this type conversion might fail. On the other hand, there are many scenarios where a router deals exclusively with XML message types. Marshalling and unmarshalling Marshalling involves converting a high-level format to a low-level format, and unmarshalling involves converting a low-level format to a high-level format. The following two processors are used to perform marshalling or unmarshalling in a route: marshal() unmarshal() For example, to read a serialized Java object from a file and unmarshal it into a Java object, you could use the route definition shown in Example 34.3, "Unmarshalling a Java Object" . Example 34.3. Unmarshalling a Java Object Final message format When an In message reaches the end of a route, the target endpoint must be able to convert the message body into a format that can be written to the physical endpoint. The same rule applies to Out messages that arrive back at the source endpoint. This conversion is usually performed implicitly, using the Apache Camel type converter. 
Typically, this involves converting from a low-level format to another low-level format, such as converting from a byte[] array to an InputStream type. 34.3. Built-In Type Converters Overview This section describes the conversions supported by the master type converter. These conversions are built into the Apache Camel core. Usually, the type converter is called through convenience functions, such as Message.getBody(Class<T> type) or Message.getHeader(String name, Class<T> type) . It is also possible to invoke the master type converter directly. For example, if you have an exchange object, exchange , you could convert a given value to a String as shown in Example 34.4, "Converting a Value to a String" . Example 34.4. Converting a Value to a String Basic type converters Apache Camel provides built-in type converters that perform conversions to and from the following basic types: java.io.File String byte[] and java.nio.ByteBuffer java.io.InputStream and java.io.OutputStream java.io.Reader and java.io.Writer java.io.BufferedReader and java.io.BufferedWriter java.io.StringReader However, not all of these types are inter-convertible. The built-in converter is mainly focused on providing conversions from the File and String types. The File type can be converted to any of the preceding types, except Reader , Writer , and StringReader . The String type can be converted to File , byte[] , ByteBuffer , InputStream , or StringReader . The conversion from String to File works by interpreting the string as a file name. The trio of String , byte[] , and ByteBuffer are completely inter-convertible. Note You can explicitly specify which character encoding to use for conversion from byte[] to String and from String to byte[] by setting the Exchange.CHARSET_NAME exchange property in the current exchange. For example, to perform conversions using the UTF-8 character encoding, call exchange.setProperty("Exchange.CHARSET_NAME", "UTF-8") . The supported character sets are described in the java.nio.charset.Charset class. Collection type converters Apache Camel provides built-in type converters that perform conversions to and from the following collection types: Object[] java.util.Set java.util.List All permutations of conversions between the preceding collection types are supported. Map type converters Apache Camel provides built-in type converters that perform conversions to and from the following map types: java.util.Map java.util.HashMap java.util.Hashtable java.util.Properties The preceding map types can also be converted into a set, of java.util.Set type, where the set elements are of the MapEntry<K,V> type. DOM type converters You can perform type conversions to the following Document Object Model (DOM) types: org.w3c.dom.Document - convertible from byte[] , String , java.io.File , and java.io.InputStream . org.w3c.dom.Node javax.xml.transform.dom.DOMSource - convertible from String . javax.xml.transform.Source - convertible from byte[] and String . All permutations of conversions between the preceding DOM types are supported. SAX type converters You can also perform conversions to the javax.xml.transform.sax.SAXSource type, which supports the SAX event-driven XML parser (see the SAX Web site for details). 
You can convert to SAXSource from the following types: String InputStream Source StreamSource DOMSource enum type converter Camel provides a type converter for performing String to enum type conversions, where the string value is converted to the matching enum constant from the specified enumeration class (the matching is case-insensitive ). This type converter is rarely needed for converting message bodies, but it is frequently used internally by Apache Camel to select particular options. For example, when setting the logging level option, the following value, INFO , is converted into an enum constant: Because the enum type converter is case-insensitive, any of the following alternatives would also work: Custom type converters Apache Camel also enables you to implement your own custom type converters. For details on how to implement a custom type converter, see Chapter 36, Type Converters . 34.4. Built-In UUID Generators Overview Apache Camel enables you to register a UUID generator in the CamelContext . This UUID generator is then used whenever Apache Camel needs to generate a unique ID - in particular, the registered UUID generator is called to generate the IDs returned by the Exchange.getExchangeId() and the Message.getMessageId() methods. For example, you might prefer to replace the default UUID generator, if part of your application does not support IDs with a length of 36 characters (like Websphere MQ). Also, it can be convenient to generate IDs using a simple counter (see the SimpleUuidGenerator ) for testing purposes. Provided UUID generators You can configure Apache Camel to use one of the following UUID generators, which are provided in the core: org.apache.camel.impl.ActiveMQUuidGenerator - (Default) generates the same style of ID as is used by Apache ActiveMQ. This implementation might not be suitable for all applications, because it uses some JDK APIs that are forbidden in the context of cloud computing (such as the Google App Engine). org.apache.camel.impl.SimpleUuidGenerator - implements a simple counter ID, starting at 1 . The underlying implementation uses the java.util.concurrent.atomic.AtomicLong type, so that it is thread-safe. org.apache.camel.impl.JavaUuidGenerator - implements an ID based on the java.util.UUID type. Because java.util.UUID is synchronized, this might affect performance on some highly concurrent systems. Custom UUID generator To implement a custom UUID generator, implement the org.apache.camel.spi.UuidGenerator interface, which is shown in Example 34.5, "UuidGenerator Interface" . The generateUuid() must be implemented to return a unique ID string. Example 34.5. UuidGenerator Interface Specifying the UUID generator using Java To replace the default UUID generator using Java, call the setUuidGenerator() method on the current CamelContext object. For example, you can register a SimpleUuidGenerator instance with the current CamelContext , as follows: Note The setUuidGenerator() method should be called during startup, before any routes are activated. Specifying the UUID generator using Spring To replace the default UUID generator using Spring, all you need to do is to create an instance of a UUID generator using the Spring bean element. When a camelContext instance is created, it automatically looks up the Spring registry, searching for a bean that implements org.apache.camel.spi.UuidGenerator . For example, you can register a SimpleUuidGenerator instance with the CamelContext as follows: [1] If there is no active method the returned value will be null .
[ "// Access the In message Message getIn(); void setIn(Message in); // Access the Out message (if any) Message getOut(); void setOut(Message out); boolean hasOut(); // Access the exchange ID String getExchangeId(); void setExchangeId(String id);", "// Access the message body Object getBody(); <T> T getBody(Class<T> type); void setBody(Object body); <T> void setBody(Object body, Class<T> type); // Access message headers Object getHeader(String name); <T> T getHeader(String name, Class<T> type); void setHeader(String name, Object value); Object removeHeader(String name); Map<String, Object> getHeaders(); void setHeaders(Map<String, Object> headers); // Access message attachments javax.activation.DataHandler getAttachment(String id); java.util.Map<String, javax.activation.DataHandler> getAttachments(); java.util.Set<String> getAttachmentNames(); void addAttachment(String id, javax.activation.DataHandler content) // Access the message ID String getMessageId(); void setMessageId(String messageId);", "from(\" SourceURL \") .filter(header(\"foo\") .isEqualTo(\"bar\")) .to(\" TargetURL \");", "from(\" SourceURL \").convertBodyTo(String.class).to(\" TargetURL \");", "from(\" SourceURL \").setBody(bodyAs(String.class).append(\"My Special Signature\")).to(\" TargetURL \");", "from(\" SourceURL \").setBody(body().append(\"My Special Signature\")).to(\" TargetURL \");", "from(\"file://tmp/appfiles/serialized\") .unmarshal() .serialization() . <FurtherProcessing> .to(\" TargetURL \");", "org.apache.camel.TypeConverter tc = exchange.getContext().getTypeConverter(); String str_value = tc.convertTo(String.class, value);", "<to uri=\"log:foo?level=INFO\"/>", "<to uri=\"log:foo?level=info\"/> <to uri=\"log:foo?level=INfo\"/> <to uri=\"log:foo?level=InFo\"/>", "// Java package org.apache.camel.spi; /** * Generator to generate UUID strings. */ public interface UuidGenerator { String generateUuid(); }", "// Java getContext().setUuidGenerator(new org.apache.camel.impl.SimpleUuidGenerator());", "<beans ...> <bean id=\"simpleUuidGenerator\" class=\"org.apache.camel.impl.SimpleUuidGenerator\" /> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> </camelContext> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/MsgFormats
Appendix A. Tool Reference
Appendix A. Tool Reference This appendix provides a quick reference for the various tools in Red Hat Enterprise Linux 7 that can be used to tweak performance. See the relevant man page for your tool for complete, up-to-date, detailed reference material. A.1. irqbalance irqbalance is a command line tool that distributes hardware interrupts across processors to improve system performance. It runs as a daemon by default, but can be run once only with the --oneshot option. The following parameters are useful for improving performance. --powerthresh Sets the number of CPUs that can idle before a CPU is placed into powersave mode. If more CPUs than the threshold are more than 1 standard deviation below the average softirq workload and no CPUs are more than one standard deviation above the average, and have more than one irq assigned to them, a CPU is placed into powersave mode. In powersave mode, a CPU is not part of irq balancing so that it is not woken unnecessarily. --hintpolicy Determines how irq kernel affinity hinting is handled. Valid values are exact ( irq affinity hint is always applied), subset ( irq is balanced, but the assigned object is a subset of the affinity hint), or ignore ( irq affinity hint is ignored completely). --policyscript Defines the location of a script to execute for each interrupt request, with the device path and irq number passed as arguments, and a zero exit code expected by irqbalance . The script defined can specify zero or more key value pairs to guide irqbalance in managing the passed irq . The following are recognized as valid key value pairs. ban Valid values are true (exclude the passed irq from balancing) or false (perform balancing on this irq ). balance_level Allows user override of the balance level of the passed irq . By default the balance level is based on the PCI device class of the device that owns the irq . Valid values are none , package , cache , or core . numa_node Allows user override of the NUMA node that is considered local to the passed irq . If information about the local node is not specified in ACPI, devices are considered equidistant from all nodes. Valid values are integers (starting from 0) that identify a specific NUMA node, and -1 , which specifies that an irq should be considered equidistant from all nodes. --banirq The interrupt with the specified interrupt request number is added to the list of banned interrupts. You can also use the IRQBALANCE_BANNED_CPUS environment variable to specify a mask of CPUs that are ignored by irqbalance . For further details, see the man page:
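The following examples are a minimal sketch of how these options combine on the command line; the IRQ number, CPU mask, and script path are placeholders to adapt to your hardware, and irqbalance must run as root.

# Balance interrupts once and exit, ignoring kernel affinity hints
irqbalance --oneshot --hintpolicy=ignore

# Exclude IRQ 44 from balancing and keep CPUs 0 and 1 out of IRQ balancing
# (IRQBALANCE_BANNED_CPUS is a hexadecimal CPU bitmask: 0x3 covers CPU 0 and CPU 1)
IRQBALANCE_BANNED_CPUS=00000003 irqbalance --oneshot --banirq=44

# A policy script receives the device path and IRQ number as arguments,
# must exit with 0, and prints key=value pairs such as ban, balance_level, and numa_node
cat > /usr/local/bin/irq-policy.sh <<'EOF'
#!/bin/sh
echo "balance_level=cache"
echo "numa_node=0"
EOF
chmod +x /usr/local/bin/irq-policy.sh
irqbalance --oneshot --policyscript=/usr/local/bin/irq-policy.sh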
[ "man irqbalance" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/appe-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference
8.164. python-urwid
8.164. python-urwid 8.164.1. RHBA-2013:1550 - python-urwid bug fix and enhancement update Updated python-urwid packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The python-urwid package provides a library for development of text user interface applications in the Python programming environment. Note The python-urwid packages have been upgraded to upstream version 1.1.1, which provides a number of bug fixes and enhancements over the version. Among other changes, this update resolves a number of incompatibilities with the version of python-urwid used in Red Hat Enterprise Linux 6. These incompatibilities posed a problem for Red Hat Enterprise Virtualization Hypervisor that requires the python-urwid packages for its new user interface. (BZ# 970981 ) Users of python-urwid are advised to upgrade to these updated packages, which add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/python-urwid
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.14 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . Install the legacy Migration Toolkit for Containers Operator on the OpenShift Container Platform 3 source cluster from the command line interface. Configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 7.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters using the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.16. MTC 1.8 only supports migrations from OpenShift Container Platform 4.14 and later. Table 7.1. MTC compatibility: Migrating from OpenShift Container Platform 3 to 4 Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.14 or later Stable MTC version MTC v.1.7. z MTC v.1.8. z Installation As described in this guide Install with OLM, release channel release-v1.8 Edge cases exist where network restrictions prevent OpenShift Container Platform 4 clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a OpenShift Container Platform 4 cluster in the cloud, the OpenShift Container Platform 4 cluster might have trouble connecting to the OpenShift Container Platform 3.11 cluster. In this case, it is possible to designate the OpenShift Container Platform 3.11 cluster as the control cluster and push workloads to the remote OpenShift Container Platform 4 cluster. 7.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.14 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.14 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . 
Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 7.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.14. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. 
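For example, logging in to an OpenShift Container Platform 3.11 source cluster might look like one of the following commands; the API URL, user, password, and token are placeholders for your environment, and 8443 is only the typical master port on 3.11.

# Log in with a user name and password
oc login https://<source_cluster_master_url>:8443 -u <cluster_admin_user> -p <password>

# Or log in with a token
oc login --token=<token> --server=https://<source_cluster_master_url>:8443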
Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 7.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.14, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 7.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 7.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 7.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. 
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 7.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 7.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 7.4.2.1. NetworkPolicy configuration 7.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 7.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 7.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. 
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 7.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 7.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 7.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 7.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 7.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 7.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. 
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 7.5.2. Retrieving Multicloud Object Gateway credentials Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . 7.5.3. Additional resources Procedure Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 7.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
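The cleanup steps above are individual oc delete commands. If you prefer to run them in one pass, a minimal wrapper script along the following lines can work. The CR name (migration-controller), the --ignore-not-found flags, and the xargs form are illustrative choices rather than part of the documented procedure, and the Operator itself is still uninstalled separately through Operator Lifecycle Manager.

#!/usr/bin/env bash
# Sketch only: strings together the individual oc delete commands from the
# procedure above. Run it once per cluster while logged in as cluster-admin.

# Delete the MigrationController CR (substitute your controller name).
oc delete migrationcontroller migration-controller -n openshift-migration --ignore-not-found

# Remove migration and velero CRDs, cluster roles, and cluster role bindings.
for pattern in 'migration.openshift.io' 'velero'; do
    for kind in crds clusterroles clusterrolebindings; do
        oc get "${kind}" -o name | grep "${pattern}" | xargs --no-run-if-empty oc delete
    done
done

# Remove the migration-operator cluster role and cluster role binding.
# The Operator itself is uninstalled separately through OLM.
oc delete clusterrole migration-operator --ignore-not-found
oc delete clusterrolebinding migration-operator --ignore-not-found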
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migrating_from_version_3_to_4/installing-restricted-3-4
7.12. autofs
7.12. autofs 7.12.1. RHSA-2015:1344 - Moderate: autofs security and bug fix update Updated autofs packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link in the References section. The autofs utility controls the operation of the automount daemon. The daemon automatically mounts file systems when in use and unmounts them when they are not busy. Security Fix CVE-2014-8169 It was found that program-based automounter maps that used interpreted languages such as Python would use standard environment variables to locate and load modules of those languages. A local attacker could potentially use this flaw to escalate their privileges on the system. Note This issue has been fixed by adding the "AUTOFS_" prefix to the affected environment variables so that they are not used to subvert the system. A configuration option ("force_standard_program_map_env") to override this prefix and to use the environment variables without the prefix has been added. In addition, warnings have been added to the manual page and to the installed configuration file. Now, by default the standard variables of the program map are provided only with the prefix added to its name. Red Hat would like to thank the Georgia Institute of Technology for reporting this issue. Bug Fixes BZ# 1163957 If the "ls *" command was executed before a valid mount, the autofs program failed on further mount attempts inside the mount point, whether the mount point was valid or not. While attempting to mount, the "ls *" command of the root directory of an indirect mount was executed, which led to an attempt to mount "*", causing it to be added to the negative map entry cache. This bug has been fixed by checking for and not adding "*" while updating the negative map entry cache. BZ# 1124083 The autofs program by design did not mount host map entries that were duplicate exports in an NFS server export list. The duplicate entries in a multi-mount map entry were recognized as a syntax error and autofs refused to perform mounts when the duplicate entries occurred. Now, autofs has been changed to continue mounting the last seen instance of the duplicate entry rather than fail, and to report the problem in the log files to alert the system administrator. BZ# 1153130 The autofs program did not recognize the yp map type in the master map. This was caused by another change in the master map parser to fix a problem with detecting the map format associated with mapping the type in the master map. The change led to an incorrect length for the type comparison of yp maps that resulted in a match operation failure. This bug has been fixed by correcting the length which is used for the comparison. BZ# 1156387 The autofs program did not update the export list of the Sun-format maps of the network shares exported from an NFS server. This happened due to a change of the Sun-format map parser leading to the hosts map update to stop working on the map re-read operation. The bug has been now fixed by selectively preventing this type of update only for the Sun-formatted maps. The updates of the export list on the Sun-format maps are now visible and refreshing of the export list is no longer supported for the Sun-formatted hosts map. 
BZ# 1175671 Within changes made for adding of the Sun-format maps, an incorrect check was added that caused a segmentation fault in the Sun-format map parser in certain circumstances. This has been now fixed by analyzing the intent of the incorrect check and changing it in order to properly identify the conditions without causing a fault. BZ# 1201195 A bug in the autofs program map lookup module caused an incorrect map format type comparison. The incorrect comparison affected the Sun-format program maps where it led to the unused macro definitions. The bug in the comparison has been fixed so that the macro definitions are not present for the Sun-format program maps. Users of autofs are advised to upgrade to these updated packages, which contain backported patches to correct these issues.
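For reference, the force_standard_program_map_env option described in the security fix above is an on/off switch in the autofs configuration. The file location and exact spelling shown below are assumptions for illustration only; verify them against the installed configuration file and the warnings added to the autofs manual page.

# Assumed location on Red Hat Enterprise Linux 6: /etc/sysconfig/autofs
# (check the installed configuration file for the exact option name and format).
#
# Leaving the option disabled keeps the CVE-2014-8169 mitigation in place:
# program maps only see the AUTOFS_-prefixed copies of the standard
# environment variables. Enable it only if a legacy program map requires the
# unprefixed variables and the risk is understood.
force_standard_program_map_env = no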
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-autofs
8.9. pNFS
8.9. pNFS Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks. Note The protocol allows for three possible pNFS layout types: files, objects, and blocks. While the Red Hat Enterprise Linux 6.4 client only supported the files layout type, Red Hat Enterprise Linux 7 supports the files layout type, with the objects and blocks layout types included as a technology preview. pNFS Flex Files Flexible Files is a new layout for pNFS that enables the aggregation of standalone NFSv3 and NFSv4 servers into a scale-out namespace. The Flex Files feature is part of the NFSv4.2 standard as described in the RFC 7862 specification. Red Hat Enterprise Linux has been able to mount NFS shares from Flex Files servers since Red Hat Enterprise Linux 7.4. Mounting pNFS Shares To enable pNFS functionality, mount shares from a pNFS-enabled server with NFS version 4.1 or later: After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. The mount entry in the output should contain minorversion=1. Use the following command to verify the module was loaded: To mount an NFS share with the Flex Files feature from a server that supports Flex Files, use NFS version 4.2 or later: Verify that the nfs_layout_flexfiles module has been loaded: Additional Resources For more information on pNFS, refer to: http://www.pnfs.com .
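For example, the mount and verification commands referenced in this section follow this pattern, where server:/remote-export and /local-directory are placeholders for your NFS export and local mount point:

# Mount a share from a pNFS-enabled server with NFS version 4.1 or later.
mount -t nfs -o v4.1 server:/remote-export /local-directory

# Confirm that the files layout kernel module was loaded on the first mount
# and that the mount entry contains minorversion=1.
lsmod | grep nfs_layout_nfsv41_files
mount | grep /local-directory

# For a Flex Files server, mount with NFS version 4.2 or later and verify
# that the flexfiles layout module was loaded.
mount -t nfs -o v4.2 server:/remote-export /local-directory
lsmod | grep nfs_layout_flexfiles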
[ "mount -t nfs -o v4.1 server:/remote-export /local-directory", "lsmod | grep nfs_layout_nfsv41_files", "mount -t nfs -o v4.2 server:/remote-export /local-directory", "lsmod | grep nfs_layout_flexfiles" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/nfs-pnfs
Using the Streams for Apache Kafka Console
Using the Streams for Apache Kafka Console Red Hat Streams for Apache Kafka 2.9 The Streams for Apache Kafka Console supports your deployment of Streams for Apache Kafka.
[ "export NAMESPACE=kafka 1 export LISTENER_TYPE=route 2 export CLUSTER_DOMAIN=<domain_name> 3", "cat examples/console/resources/kafka/*.yaml | envsubst | kubectl apply -n USD{NAMESPACE} -f -", "get pods -n kafka", "NAME READY STATUS RESTARTS strimzi-cluster-operator 1/1 Running 0 console-kafka-console-nodepool-0 1/1 Running 0 console-kafka-console-nodepool-1 1/1 Running 0 console-kafka-console-nodepool-2 1/1 Running 0", "export NAMESPACE=operator-namespace", "cat install/console-operator/olm/*.yaml | envsubst | kubectl apply -n USD{NAMESPACE} -f -", "get pods -n operator-namespace", "NAME READY STATUS RESTARTS console-operator 1/1 Running 1", "export NAMESPACE=operator-namespace", "cat install/console-operator/non-olm/console-operator.yaml | envsubst | kubectl apply -n USD{NAMESPACE} -f -", "get pods -n operator-namespace", "NAME READY STATUS RESTARTS console-operator 1/1 Running 1", "apiVersion: console.streamshub.github.com/v1alpha1 kind: Console metadata: name: my-console spec: hostname: my-console.<cluster_domain> 1 kafkaClusters: - name: console-kafka 2 namespace: kafka 3 listener: secure 4 properties: values: [] 5 valuesFrom: [] 6 credentials: kafkaUser: name: console-kafka-user1 7", "apply -f examples/console/resources/console/010-Console-example.yaml -n console-namespace", "get pods -n console-namespace", "NAME READY STATUS RUNNING console-kafka 1/1 1 1", "apiVersion: console.streamshub.github.com/v1alpha1 kind: Console metadata: name: my-console spec: hostname: my-console.<cluster_domain> security: oidc: authServerUrl: <OIDC_discovery_URL> 1 clientId: <client_id> 2 clientSecret: 3 valueFrom: secretKeyRef: name: my-oidc-secret key: client-secret subjects: - claim: groups 4 include: 5 - <team_name_1> - <team_name_2> roleNames: 6 - developers - claim: groups include: - <team_name_3> roleNames: - administrators - include: 7 - <user_1> - <user_2> roleNames: - administrators roles: - name: developers rules: - resources: 8 - kafkas - resourceNames: 9 - <dev_cluster_a> - <dev_cluster_b> - privileges: 10 - '*' - name: administrators rules: - resources: - kafkas - privileges: - '*' kafkaClusters: - name: console-kafka namespace: kafka listener: secure credentials: kafkaUser: name: console-kafka-user1", "apiVersion: console.streamshub.github.com/v1alpha1 kind: Console metadata: name: my-console spec: hostname: my-console.<cluster_domain> # kafkaClusters: - name: console-kafka namespace: kafka listener: secure credentials: kafkaUser: name: console-kafka-user1 security: roles: - name: developers rules: - resources: - topics - topics/records - consumerGroups - rebalances - privileges: - get - list - name: administrators rules: - resources: - topics - topics/records - consumerGroups - rebalances - nodes/configs - privileges: - get - list - resources: - consumerGroups - rebalances - privileges: - update", "apiVersion: console.streamshub.github.com/v1alpha1 kind: Console metadata: name: my-console spec: hostname: my-console.<cluster_domain> # metricsSources: - name: my-ocp-prometheus type: openshift-monitoring kafkaClusters: - name: console-kafka namespace: kafka listener: secure metricsSource: my-ocp-prometheus credentials: kafkaUser: name: console-kafka-user1 #", "apiVersion: console.streamshub.github.com/v1alpha1 kind: Console metadata: name: my-console spec: hostname: my-console.<cluster_domain> # metricsSources: - name: my-custom-prometheus type: standalone url: <prometheus_instance_address> 1 authentication: 2 username: my-user password: my-password trustStore: 3 type: JKS content: 
valueFrom: configMapKeyRef: name: my-prometheus-configmap key: ca.jks password: 4 value: changeit kafkaClusters: - name: console-kafka namespace: kafka listener: secure metricsSource: my-ocp-prometheus credentials: kafkaUser: name: console-kafka-user1 #", "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html-single/using_the_streams_for_apache_kafka_console/index
Chapter 7. User Defined Functions
Chapter 7. User Defined Functions 7.1. User Defined Functions You can extend the Red Hat JBoss Data Virtualization function library by creating User Defined Functions (UDFs), as well as User Defined Aggregate Functions (UDAFs). The following are used to define a UDF: Function Name - When you create the function name, keep these requirements in mind: You cannot overload existing Red Hat JBoss Data Virtualization functions. The function name must be unique among user-defined functions in its model for the number of arguments. You can use the same function name for different numbers of types of arguments. Hence, you can overload your user-defined functions. The function name cannot contain the '.' character. The function name cannot exceed 255 characters. Input Parameters - defines a type specific signature list. All arguments are considered required. Return Type - the expected type of the returned scalar value. Pushdown - can be one of REQUIRED, NEVER, ALLOWED. Indicates the expected pushdown behavior. If NEVER or ALLOWED are specified then a Java implementation of the function should be supplied. If REQUIRED is used, then user must extend the Translator for the source and add this function to its pushdown function library. invocationClass/invocationMethod - optional properties indicating the static method to invoke when the UDF is not pushed down. Deterministic - if the method will always return the same result for the same input parameters. Defaults to false. It is important to mark the function as deterministic if it returns the same value for the same inputs as this will lead to better performance. See also the Relational extension boolean metadata property "deterministic" and the DDL OPTION property "determinism". Note If using the pushdown UDF in Teiid Designer, the user must create a source function on the source model, so that the parsing will work correctly. Pushdown scalar functions differ from normal user-defined functions in that no code is provided for evaluation in the engine. An exception will be raised if a pushdown required function cannot be evaluated by the appropriate source.
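For the non-pushdown case, the invocationClass and invocationMethod properties point at a public static Java method whose signature matches the declared input parameters and return type. The class and method below are invented purely for illustration and are not part of the product; they only sketch what such an implementation can look like:

package org.example.udf;

// Hypothetical example class. In the function model, invocationClass would be
// set to org.example.udf.StringFunctions and invocationMethod to initialCaps,
// with a single string input parameter and a string return type.
public final class StringFunctions {

    private StringFunctions() {
        // utility class, no instances
    }

    // Deterministic: the same input always yields the same output, so the
    // function can safely be marked as deterministic for better performance.
    public static String initialCaps(String value) {
        if (value == null || value.isEmpty()) {
            return value;
        }
        return Character.toUpperCase(value.charAt(0)) + value.substring(1).toLowerCase();
    }
}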
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-User_Defined_Functions
Chapter 17. Performing latency tests for platform verification
Chapter 17. Performing latency tests for platform verification You can use the Cloud-native Network Functions (CNF) tests image to run latency tests on a CNF-enabled OpenShift Container Platform cluster, where all the components required for running CNF workloads are installed. Run the latency tests to validate node tuning for your workload. The cnf-tests container image is available at registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 . 17.1. Prerequisites for running latency tests Your cluster must meet the following requirements before you can run the latency tests: You have applied all the required CNF configurations. This includes the PerformanceProfile cluster and other configuration according to the reference design specifications (RDS) or your specific requirements. You have logged in to registry.redhat.io with your Customer Portal credentials by using the podman login command. Additional resources Scheduling a workload onto a worker with real-time capabilities 17.2. Measuring latency The cnf-tests image uses three tools to measure the latency of the system: hwlatdetect cyclictest oslat Each tool has a specific use. Use the tools in sequence to achieve reliable test results. hwlatdetect Measures the baseline that the bare-metal hardware can achieve. Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold because you cannot fix hardware latency spikes by operating system tuning. cyclictest Verifies the real-time kernel scheduler latency after hwlatdetect passes validation. The cyclictest tool schedules a repeated timer and measures the difference between the desired and the actual trigger times. The difference can uncover basic issues with the tuning caused by interrupts or process priorities. The tool must run on a real-time kernel. oslat Behaves similarly to a CPU-intensive DPDK application and measures all the interruptions and disruptions to the busy loop that simulates CPU heavy data processing. The tests introduce the following environment variables: Table 17.1. Latency test environment variables Environment variables Description LATENCY_TEST_DELAY Specifies the amount of time in seconds after which the test starts running. You can use the variable to allow the CPU manager reconcile loop to update the default CPU pool. The default value is 0. LATENCY_TEST_CPUS Specifies the number of CPUs that the pod running the latency tests uses. If you do not set the variable, the default configuration includes all isolated CPUs. LATENCY_TEST_RUNTIME Specifies the amount of time in seconds that the latency test must run. The default value is 300 seconds. Note To prevent the Ginkgo 2.0 test suite from timing out before the latency tests complete, set the -ginkgo.timeout flag to a value greater than LATENCY_TEST_RUNTIME + 2 minutes. If you also set a LATENCY_TEST_DELAY value then you must set -ginkgo.timeout to a value greater than LATENCY_TEST_RUNTIME + LATENCY_TEST_DELAY + 2 minutes. The default timeout value for the Ginkgo 2.0 test suite is 1 hour. HWLATDETECT_MAXIMUM_LATENCY Specifies the maximum acceptable hardware latency in microseconds for the workload and operating system. If you do not set the value of HWLATDETECT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool compares the default expected threshold (20ms) and the actual maximum latency in the tool itself. Then, the test fails or succeeds accordingly. 
CYCLICTEST_MAXIMUM_LATENCY Specifies the maximum latency in microseconds that all threads expect before waking up during the cyclictest run. If you do not set the value of CYCLICTEST_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. OSLAT_MAXIMUM_LATENCY Specifies the maximum acceptable latency in microseconds for the oslat test results. If you do not set the value of OSLAT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. MAXIMUM_LATENCY Unified variable that specifies the maximum acceptable latency in microseconds. Applicable for all available latency tools. Note Variables that are specific to a latency tool take precedence over unified variables. For example, if OSLAT_MAXIMUM_LATENCY is set to 30 microseconds and MAXIMUM_LATENCY is set to 10 microseconds, the oslat test will run with maximum acceptable latency of 30 microseconds. 17.3. Running the latency tests Run the cluster latency tests to validate node tuning for your Cloud-native Network Functions (CNF) workload. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. This procedure runs the three individual tests hwlatdetect , cyclictest , and oslat . For details on these individual tests, see their individual sections. Procedure Open a shell prompt in the directory containing the kubeconfig file. You provide the test image with a kubeconfig file in current directory and its related USDKUBECONFIG environment variable, mounted through a volume. This allows the running container to use the kubeconfig file from inside the container. Note In the following command, your local kubeconfig is mounted to kubeconfig/kubeconfig in the cnf-tests container, which allows access to the cluster. To run the latency tests, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=600\ -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh \ --ginkgo.v --ginkgo.timeout="24h" The LATENCY_TEST_RUNTIME is shown in seconds, in this case 600 seconds (10 minutes). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Optional: Append --ginkgo.dry-run flag to run the latency tests in dry-run mode. This is useful for checking what commands the tests run. Optional: Append --ginkgo.v flag to run the tests with increased verbosity. Optional: Append --ginkgo.timeout="24h" flag to ensure the Ginkgo 2.0 test suite does not timeout before the latency tests complete. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). 17.3.1. Running hwlatdetect The hwlatdetect tool is available in the rt-kernel package with a regular subscription of Red Hat Enterprise Linux (RHEL) 9.x. 
Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have reviewed the prerequisites for running latency tests. Procedure To run the hwlatdetect tests, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/test-run.sh --ginkgo.focus="hwlatdetect" --ginkgo.v --ginkgo.timeout="24h" The hwlatdetect test runs for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 3 specs [...] 
โ€ข Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (366.08s) FAIL 1 You can configure the latency threshold by using the MAXIMUM_LATENCY or the HWLATDETECT_MAXIMUM_LATENCY environment variables. 2 The maximum latency value measured during the test. Example hwlatdetect test results You can capture the following types of results: Rough results that are gathered after each run to create a history of impact on any changes made throughout the test. The combined set of the rough tests with the best results and configuration settings. 
Example of good results hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 The hwlatdetect tool only provides output if the sample exceeds the specified threshold. Example of bad results hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63 The output of hwlatdetect shows that multiple samples exceed the threshold. However, the same output can indicate different results based on the following factors: The duration of the test The number of CPU cores The host firmware settings Warning Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold. Fixing latencies introduced by hardware might require you to contact the system vendor support. Not all latency spikes are hardware related. Ensure that you tune the host firmware to meet your workload requirements. For more information, see Setting firmware parameters for system tuning . 17.3.2. Running cyclictest The cyclictest tool measures the real-time kernel scheduler latency on the specified CPUs. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have reviewed the prerequisites for running latency tests. Procedure To perform the cyclictest , run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/test-run.sh --ginkgo.focus="cyclictest" --ginkgo.v --ginkgo.timeout="24h" The command runs the cyclictest tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (in this example, 20 ms). Latency spikes of 20 ms and above are generally not acceptable for telco RAN workloads. If the results exceed the latency threshold, the test fails. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). 
Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 3 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.48s) FAIL Example cyclictest results The same output can indicate different results for different workloads. For example, spikes up to 18ms are acceptable for 4G DU workloads, but not for 5G DU workloads. Example of good results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries ... # Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 # Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 # Histogram Overflow at cycle number: # Thread 0: # Thread 1: # Thread 2: # Thread 3: # Thread 4: # Thread 5: # Thread 6: # Thread 7: # Thread 8: # Thread 9: # Thread 10: # Thread 11: # Thread 12: # Thread 13: # Thread 14: # Thread 15: Example of bad results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries ... 
# Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 # Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 # Histogram Overflow at cycle number: # Thread 0: 155922 # Thread 1: 110064 # Thread 2: 110064 # Thread 3: 110063 155921 # Thread 4: 110063 155921 # Thread 5: 155920 # Thread 6: # Thread 7: 110062 # Thread 8: 110062 # Thread 9: 155919 # Thread 10: 110061 155919 # Thread 11: 155918 # Thread 12: 155918 # Thread 13: 110060 # Thread 14: 110060 # Thread 15: 110059 155917 17.3.3. Running oslat The oslat test simulates a CPU-intensive DPDK application and measures all the interruptions and disruptions to test how the cluster handles CPU heavy data processing. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have reviewed the prerequisites for running latency tests. Procedure To perform the oslat test, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/test-run.sh --ginkgo.focus="oslat" --ginkgo.v --ginkgo.timeout="24h" LATENCY_TEST_CPUS specifies the number of CPUs to test with the oslat command. The command runs the oslat tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will run 1 of 3 specs [...] 
โ€ข Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.42s) FAIL 1 In this example, the measured latency is outside the maximum allowed value. 17.4. Generating a latency test failure report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a test failure report with information about the cluster state and resources for troubleshooting by passing the --report parameter with the path to where the report is dumped: USD podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> \ -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/test-run.sh --report <report_folder_path> --ginkgo.v where: <report_folder_path> Is the path to the folder where the report is generated. 17.5. Generating a JUnit latency test report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a JUnit-compliant XML report by passing the --junit parameter together with the path to where the report is dumped: Note You must create the junit folder before running this command. USD podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junit:/junit \ -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/test-run.sh --ginkgo.junit-report junit/<file-name>.xml --ginkgo.v where: junit Is the folder where the junit report is stored. 17.6. Running latency tests on a single-node OpenShift cluster You can run latency tests on single-node OpenShift clusters. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have applied a cluster performance profile by using the Node Tuning Operator. 
Procedure To run the latency tests on a single-node OpenShift cluster, run the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" Note The default runtime for each test is 300 seconds. For valid latency test results, run the tests for at least 12 hours by updating the LATENCY_TEST_RUNTIME variable. To run the buckets latency validation step, you must specify a maximum latency. For details on maximum latency variables, see the table in the "Measuring latency" section. After running the test suite, all the dangling resources are cleaned up. 17.7. Running latency tests in a disconnected cluster The CNF tests image can run tests in a disconnected cluster that is not able to reach external registries. This requires two steps: Mirroring the cnf-tests image to the custom disconnected registry. Instructing the tests to consume the images from the custom disconnected registry. Mirroring the images to a custom registry accessible from the cluster A mirror executable is shipped in the image to provide the input required by oc to mirror the test image to a local registry. Run this command from an intermediate machine that has access to the cluster and registry.redhat.io : USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f - where: <disconnected_registry> Is the disconnected mirror registry you have configured, for example, my.local.registry:5000/ . When you have mirrored the cnf-tests image into the disconnected registry, you must override the original registry used to fetch the images when running the tests, for example: podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e IMAGE_REGISTRY="<disconnected_registry>" \ -e CNF_TESTS_IMAGE="cnf-tests-rhel8:v4.17" \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> \ <disconnected_registry>/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" Configuring the tests to consume images from a custom registry You can run the latency tests using a custom test image and image registry using CNF_TESTS_IMAGE and IMAGE_REGISTRY variables. To configure the latency tests to use a custom test image and image registry, run the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e IMAGE_REGISTRY="<custom_image_registry>" \ -e CNF_TESTS_IMAGE="<custom_cnf-tests_image>" \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" where: <custom_image_registry> is the custom image registry, for example, custom.registry:5000/ . <custom_cnf-tests_image> is the custom cnf-tests image, for example, custom-cnf-tests-image:latest . Mirroring images to the cluster OpenShift image registry OpenShift Container Platform provides a built-in container image registry, which runs as a standard workload on the cluster. 
Procedure Gain external access to the registry by exposing it with a route: USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Fetch the registry endpoint by running the following command: USD REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Create a namespace for exposing the images: USD oc create ns cnftests Make the image stream available to all the namespaces used for tests. This is required to allow the tests namespaces to fetch the images from the cnf-tests image stream. Run the following commands: USD oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests USD oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests Retrieve the docker secret name and auth token by running the following commands: USD SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'} USD TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath="{.data['\.dockercfg']}" | base64 --decode | jq '.["image-registry.openshift-image-registry.svc:5000"].auth') Create a dockerauth.json file, for example: USD echo "{\"auths\": { \"USDREGISTRY\": { \"auth\": USDTOKEN } }}" > dockerauth.json Do the image mirroring: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:4.17 \ /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true \ -a=USD(pwd)/dockerauth.json -f - Run the tests: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> \ -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" Mirroring a different set of test images You can optionally change the default upstream images that are mirrored for the latency tests. Procedure The mirror command tries to mirror the upstream images by default. This can be overridden by passing a file with the following format to the image: [ { "registry": "public.registry.io:5000", "image": "imageforcnftests:4.17" } ] Pass the file to the mirror command, for example saving it locally as images.json . With the following command, the local path is mounted in /kubeconfig inside the container and that can be passed to the mirror command. USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/mirror \ --registry "my.local.registry:5000/" --images "/kubeconfig/images.json" \ | oc image mirror -f - 17.8. Troubleshooting errors with the cnf-tests container To run latency tests, the cluster must be accessible from within the cnf-tests container. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Verify that the cluster is accessible from inside the cnf-tests container by running the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 \ oc get nodes If this command does not work, an error related to spanning across DNS, MTU size, or firewall access might be occurring.
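The three tools are run from the same container image and differ mainly in the --ginkgo.focus value, so they can be chained. The following wrapper is a sketch that reuses the short example values shown above and applies LATENCY_TEST_CPUS=10 to all three runs as a simplification; for final verification the runtime must be at least 43200 seconds:

#!/usr/bin/env bash
# Sketch only: runs the hwlatdetect, cyclictest, and oslat suites back to back
# from the directory that contains the kubeconfig file, with shared settings.
set -u

RUNTIME=600
MAX_LATENCY=20    # interpreted per the MAXIMUM_LATENCY description above
IMAGE=registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17

for focus in hwlatdetect cyclictest oslat; do
    echo "### Running ${focus} ###"
    podman run -v "$(pwd)/:/kubeconfig:Z" \
        -e KUBECONFIG=/kubeconfig/kubeconfig \
        -e LATENCY_TEST_CPUS=10 \
        -e LATENCY_TEST_RUNTIME="${RUNTIME}" \
        -e MAXIMUM_LATENCY="${MAX_LATENCY}" \
        "${IMAGE}" \
        /usr/bin/test-run.sh --ginkgo.focus="${focus}" --ginkgo.v --ginkgo.timeout="24h"
done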
[ "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.focus=\"hwlatdetect\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 3 specs [...] โ€ข Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] 
JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (366.08s) FAIL", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0", "hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.focus=\"cyclictest\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 3 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! 
-- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.48s) FAIL", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.focus=\"oslat\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will 
run 1 of 3 specs [...] โ€ข Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.42s) FAIL", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --report <report_folder_path> --ginkgo.v", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junit:/junit -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.junit-report junit/<file-name>.xml --ginkgo.v", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -", "run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.17\" -e LATENCY_TEST_RUNTIME=<time_in_seconds> <disconnected_registry>/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}", "TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.17 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror 
--insecure=true -a=USD(pwd)/dockerauth.json -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=<time_in_seconds> -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.17\" } ]", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17 oc get nodes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/cnf-performing-platform-verification-latency-tests
function::ip_ntop
function::ip_ntop Name function::ip_ntop - Returns a string representation for an IPv4 address Synopsis Arguments addr the IPv4 address represented as an integer
[ "ip_ntop:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ip-ntop
Chapter 2. Known issues
Chapter 2. Known issues ReadyToRun , which is enabled on a source-to-image build via the DOTNET_PUBLISH_READYTORUN environment variable, is not supported on s390x . The build will print a warning and ignore this variable. The SDK image contains nodeJS to support building JavaScript front-ends in tandem with the .NET backend. Some JavaScript web front-ends cannot be built on s390x and aarch64 due to missing nodeJS packages. .NET 6.0 on s390x does not understand memory and cpu limits in containers. In such environments, it is possible that .NET 6.0 will try to use more memory than allocated to the container, causing the container to get killed or restarted in OpenShift Container Platform. As a workaround you can manually specify a heap limit through an environment variable: MONO_GC_PARAMS=max-heap-size=<limit> . You should set the limit to 75% of the memory allocated to the container. For example, if the container memory limit is 300MB, set MONO_GC_PARAMS=max-heap-size=225M . See Known Issues in the .NET 6.0 Release Notes for .NET 6.0 RPM packages for a list of known issues and workarounds for RPMs.
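The heap-limit workaround described above can be applied to a running deployment on OpenShift Container Platform with an environment variable. This is only a sketch: the deployment name my-dotnet-app is a placeholder, and the 225M value assumes a 300MB container memory limit (75% of the limit), so adjust it to your own limit.

# Set the Mono GC heap limit on the deployment (triggers a new rollout)
oc set env deployment/my-dotnet-app MONO_GC_PARAMS=max-heap-size=225M

# Confirm the variable is set
oc set env deployment/my-dotnet-app --list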
null
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_containers/known-issues-containers_release-notes-for-dotnet-containers
3.7. Managing Subject Names and Subject Alternative Names
3.7. Managing Subject Names and Subject Alternative Names The subject name of a certificate is a distinguished name (DN) that contains identifying information about the entity to which the certificate is issued. This subject name can be built from standard LDAP directory components, such as common names and organizational units. These components are defined in X.500. In addition to - or even in place of - the subject name, the certificate can have a subject alternative name , which is a kind of extension set for the certificate that includes additional information that is not defined in X.500. The naming components for both subject names and subject alternative names can be customized. Important If the subject name is empty, then the Subject Alternative Name extension must be present and marked critical. 3.7.1. Using the Requester CN or UID in the Subject Name The cn or uid value from a certificate request can be used to build the subject name of the issued certificate. This section demonstrates a profile that requires the naming attribute (CN or UID) being specified in the Subject Name Constraint to be present in the certificate request. If the naming attribute is missing, the request is rejected. There are two parts to this configuration: The CN or UID format is set in the pattern configuration in the Subject Name Constraint. The format of the subject DN, including the CN or UID token and the specific suffix for the certificate, is set in the Subject Name Default. For example, to use the CN in the subject DN: In this example, if a request comes in with the CN of cn=John Smith , then the certificate will be issued with a subject DN of cn=John Smith,DC=example, DC=com . If the request comes in but it has a UID of uid=jsmith and no CN, then the request is rejected. The same configuration is used to pull the requester UID into the subject DN: The format for the pattern parameter is covered in Section B.2.11, "Subject Name Constraint" and Section B.1.27, "Subject Name Default" . 3.7.2. Inserting LDAP Directory Attribute Values and Other Information into the Subject Alt Name Information from an LDAP directory or that was submitted by the requester can be inserted into the subject alternative name of the certificate by using matching variables in the Subject Alt Name Extension Default configuration. This default sets the type (format) of information and then the matching pattern (variable) to use to retrieve the information. For example: This inserts the requester's email as the first CN component in the subject alt name. To use additional components, increment the Type_ , Pattern_ , and Enable_ values numerically, such as Type_1 . Configuring the subject alt name is detailed in Section B.1.23, "Subject Alternative Name Extension Default" , as well. To insert LDAP components into the subject alt name of the certificate: Inserting LDAP attribute values requires enabling the user directory authentication plug-in, SharedSecret . Open the CA Console. Select Authentication in the left navigation tree. In the Authentication Instance tab, click Add , and add an instance of the SharedSecret authentication plug-in. Enter the following information: Save the new plug-in instance. Note pkiconsole is being deprecated. For information on setting a CMC shared token, see Section 10.4.2, "Setting a CMC Shared Secret" . The ldapStringAttributes parameter instructs the authentication plug-in to read the value of the mail attribute from the user's LDAP entry and put that value in the certificate request. 
When the value is in the request, the certificate profile policy can be set to insert that value for an extension value. The format for the dnpattern parameter is covered in Section B.2.11, "Subject Name Constraint" and Section B.1.27, "Subject Name Default" . To enable the CA to insert the LDAP attribute value in the certificate extension, edit the profile's configuration file, and insert a policy set parameter for an extension. For example, to insert the mail attribute value in the Subject Alternative Name extension in the caFullCMCSharedTokenCert profile, change the following code: For more details about editing a profile, see Section 3.2.1.3, "Editing a Certificate Profile in Raw Format" . Restart the CA. For this example, certificates submitted through the caFullCMCSharedTokenCert profile enrollment form will have the Subject Alternative Name extension added with the value of the requester's mail LDAP attribute. For example: There are many attributes which can be automatically inserted into certificates by being set as a token ( USDXUSD ) in any of the Pattern_ parameters in the policy set. The common tokens are listed in Table 3.1, "Variables Used to Populate Certificates" , and the default profiles contain examples for how these tokens are used. Table 3.1. Variables Used to Populate Certificates Policy Set Token Description USDrequest.auth_token.cn[0]USD The LDAP common name ( cn ) attribute of the user who requested the certificate. USDrequest.auth_token.mail[0]USD The value of the LDAP email ( mail ) attribute of the user who requested the certificate. USDrequest.auth_token.tokencertsubjectUSD The certificate subject name. USDrequest.auth_token.uidUSD The LDAP user ID ( uid ) attribute of the user who requested the certificate. USDrequest.auth_token.userdnUSD The user DN of the user who requested the certificate. USDrequest.auth_token.useridUSD The value of the user ID attribute for the user who requested the certificate. USDrequest.uidUSD The value of the user ID attribute for the user who requested the certificate. USDrequest.requestor_emailUSD The email address of the person who submitted the request. USDrequest.requestor_nameUSD The person who submitted the request. USDrequest.upnUSD The Microsoft UPN. This has the format (UTF8String)1.3.6.1.4.1.311.20.2.3,USDrequest.upnUSD . USDserver.sourceUSD Instructs the server to generate a version 4 UUID (random number) component in the subject name. This always has the format (IA5String)1.2.3.4,USDserver.sourceUSD . USDrequest.auth_token.userUSD Used when the request was submitted by TPS. The TPS subsystem trusted manager who requested the certificate. USDrequest.subjectUSD Used when the request was submitted by TPS. The subject name DN of the entity to which TPS has resolved and requested for. For example, cn=John.Smith.123456789,o=TMS Org 3.7.3. Using the CN Attribute in the SAN Extension Several client applications and libraries no longer support using the Common Name (CN) attribute of the Subject DN for domain name validation, which has been deprecated in RFC 2818 . Instead, these applications and libraries use the dNSName Subject Alternative Name (SAN) value in the certificate request. Certificate System copies the CN only if it matches the preferred name syntax according to RFC 1034 Section 3.5 and has more than one component. Additionally, existing SAN values are preserved. For example, the dNSName value based on the CN is appended to existing SANs. 
To configure Certificate System to automatically use the CN attribute in the SAN extension, edit the certificate profile used to issue the certificates. For example: Disable the profile: Edit the profile: Add the following configuration with a unique set number for the profile. For example: The example uses 12 as the set number. Append the new policy set number to the policyset.userCertSet.list parameter. For example: Save the profile. Enable the profile: Note All default server profiles contain the commonNameToSANDefaultImpl default. 3.7.4. Accepting SAN Extensions from a CSR In certain environments, administrators want to allow specifying Subject Alternative Name (SAN) extensions in Certificate Signing Request (CSR). 3.7.4.1. Configuring a Profile to Retrieve SANs from a CSR To allow retrieving SANs from a CSR, use the User Extension Default. For details, see Section B.1.32, "User Supplied Extension Default" . Note A SAN extension can contain one or more SANs. To accept SANs from a CSR, add the following default and constraint to a profile, such as caCMCECserverCert : 3.7.4.2. Generating a CSR with SANs For example, to generate a CSR with two SANs using the certutil utility: After generating the CSR, follow the steps described in Section 5.5.2, "The CMC Enrollment Process" to complete the CMC enrollment.
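After enrollment, it can be useful to confirm that the issued certificate actually carries the expected Subject Alternative Name values. The following check is a generic OpenSSL inspection, not a Certificate System command; the file name cert.pem is an assumption, and you may first need to export the certificate in PEM format.

openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"

The output should list the dNSName or RFC822Name entries that the profile was configured to copy or accept from the CSR.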
[ "policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=CN=USDrequest.req_subject_name.cnUSD,DC=example, DC=com", "policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=UID=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=UID=USDrequest.req_subject_name.uidUSD,DC=example, DC=com", "policyset.userCertSet.8.default.class_id=subjectAltNameExtDefaultImpl policyset.userCertSet.8.default.name=Subject Alt Name Constraint policyset.userCertSet.8.default.params.subjAltNameExtCritical=false policyset.userCertSet.8.default.params.subjAltExtType_0=RFC822Name policyset.userCertSet.8.default.params.subjAltExtPattern_0=USDrequest.requestor_emailUSD policyset.userCertSet.8.default.params.subjAltExtGNEnable_0=true", "pkiconsole https://server.example.com:8443/ca", "Authentication InstanceID=SharedToken shrTokAttr=shrTok ldap.ldapconn.host= server.example.com ldap.ldapconn.port= 636 ldap.ldapconn.secureConn=true ldap.ldapauth.bindDN= cn=Directory Manager password= password ldap.ldapauth.authtype=BasicAuth ldap.basedn= ou=People,dc=example,dc=org", "policyset.setID.8.default.params. subjAltExtPattern_0=USDrequest.auth_token.mail[0]USD", "systemctl restart pki-tomcatd-nuxwdog@ instance_name .service", "Identifier: Subject Alternative Name - 2.5.29.17 Critical: no Value: RFC822Name: [email protected]", "pki -c password -p 8080 -n \" PKI Administrator for example.com \" ca-profile-disable profile_name", "pki -c password -p 8080 -n \" PKI Administrator for example.com \" ca-profile-edit profile_name", "policyset.serverCertSet.12.constraint.class_id=noConstraintImpl policyset.serverCertSet.12.constraint.name=No Constraint policyset.serverCertSet.12.default.class_id= commonNameToSANDefaultImpl policyset.serverCertSet.12.default.name= Copy Common Name to Subject", "policyset.userCertSet.list=1,10,2,3,4,5,6,7,8,9 ,12", "pki -c password -p 8080 -n \" PKI Administrator for example.com \" ca-profile-enable profile_name", "prefix .constraint.class_id=noConstraintImpl prefix .constraint.name=No Constraint prefix .default.class_id=userExtensionDefaultImpl prefix .default.name=User supplied extension in CSR prefix .default.params.userExtOID=2.5.29.17", "certutil -R -k ec -q nistp256 -d . -s \"cn= Example Multiple SANs \" --extSAN dns: www.example.com ,dns: www.example.org -a -o /root/request.csr.p10" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing_subject_names_and_subject_alternative_names
Deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform
Deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform Red Hat Decision Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_red_hat_decision_manager_on_red_hat_openshift_container_platform/index
Chapter 28. ImageCVEService
Chapter 28. ImageCVEService 28.1. SuppressCVEs PATCH /v1/imagecves/suppress SuppressCVE suppresses image cves. 28.1.1. Description 28.1.2. Parameters 28.1.2.1. Body Parameter Name Description Required Default Pattern body V1SuppressCVERequest X 28.1.3. Return Type Object 28.1.4. Content Type application/json 28.1.5. Responses Table 28.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 28.1.6. Samples 28.1.7. Common object reference 28.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 28.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 28.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 28.1.7.3. V1SuppressCVERequest Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832. 
duration String In JSON format, the Duration type is encoded as a string rather than an object, where the string ends in the suffix \"s\" (indicating seconds) and is preceded by the number of seconds, with nanoseconds expressed as fractional seconds. For example, 3 seconds with 0 nanoseconds should be encoded in JSON format as \"3s\", while 3 seconds and 1 nanosecond should be expressed in JSON format as \"3.000000001s\", and 3 seconds and 1 microsecond should be expressed in JSON format as \"3.000001s\". 28.2. UnsuppressCVEs PATCH /v1/imagecves/unsuppress UnsuppressCVE unsuppresses image cves. 28.2.1. Description 28.2.2. Parameters 28.2.2.1. Body Parameter Name Description Required Default Pattern body V1UnsuppressCVERequest X 28.2.3. Return Type Object 28.2.4. Content Type application/json 28.2.5. Responses Table 28.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 28.2.6. Samples 28.2.7. Common object reference 28.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 28.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 28.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 28.2.7.3. V1UnsuppressCVERequest Field Name Required Nullable Type Description Format cves List of string These are (NVD) vulnerability identifiers, cve field of storage.CVE , and not the id field. For example, CVE-2021-44832.
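A minimal sketch of calling these endpoints with curl follows. The Central hostname and the API token held in the ROX_API_TOKEN variable are assumptions; the endpoint paths, the cves field, the example CVE identifier, and the seconds-suffixed duration format are taken from the tables above.

# Suppress (snooze) a CVE for 24 hours (86400 seconds)
curl -k -X PATCH "https://central.example.com/v1/imagecves/suppress" \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"cves": ["CVE-2021-44832"], "duration": "86400s"}'

# Unsuppress the same CVE
curl -k -X PATCH "https://central.example.com/v1/imagecves/unsuppress" \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"cves": ["CVE-2021-44832"]}'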
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/imagecveservice
5.2. Uploading an AMI image to AWS
5.2. Uploading an AMI image to AWS This section describes how to upload an AMI image to AWS. Prerequisites Your system must be set up for uploading AWS images. You must have an AWS image created by Image Builder. Use the ami output type in CLI or Amazon Machine Image Disk (.ami) in GUI when creating the image. Procedure 1. Push the image to S3: 2. After the upload to S3 ends, import the image as a snapshot into EC2: Replace my-image with the name of the image. To track progress of the import, run: 3. Create an image from the uploaded snapshot by selecting the snapshot in the EC2 console, right clicking on it and selecting Create Image: Figure 5.1. Create Image 4. Select the Virtualization type of Hardware-assisted virtualization in the image you create: Figure 5.2. Virtualization type 5. Now you can run an instance using whatever mechanism you like (CLI or AWS Console) from the snapshot. Use your private key via SSH to access the resulting EC2 instance. Log in as ec2-user.
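If you prefer the CLI for the final step, the following sketch launches an instance from the newly registered image. The image ID, key pair name, and instance type are placeholders; any instance type available in your region works.

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair

# Once the instance is running, connect with the matching private key:
ssh -i my-key-pair.pem ec2-user@<instance_public_ip>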
[ "USD AMI=8db1b463-91ee-4fd9-8065-938924398428-disk.ami", "aws s3 cp USDAMI s3://USDBUCKET Completed 24.2 MiB/4.4 GiB (2.5 MiB/s) with 1 file(s) remaining", "A printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDAMI > containers.json", "aws ec2 import-snapshot disk-container file://containers.json", "aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_2
Chapter 10. Using Data Grid in Red Hat JBoss EAP applications
Chapter 10. Using Data Grid in Red Hat JBoss EAP applications Red Hat JBoss EAP includes Data Grid modules that you can use in Red Hat JBoss EAP applications. You can do this in two ways: Include the Data Grid libraries in a Red Hat JBoss EAP application. When you include the Data Grid libraries within an application, the caches are local to the application and cannot be used by other applications. Additionally, the cache configuration is within the application. Use the Data Grid libraries provided by Red Hat JBoss EAP. Using the Data Grid libraries provided by Red Hat JBoss EAP has the following benefits: The cache is shared between applications. The cache configuration is part of Red Hat JBoss EAP standalone or domain XML files. Applications do not include the Data Grid libraries; instead, they reference the required module from the MANIFEST.MF or jboss-deployment-structure.xml configuration files. The following procedures describe using the Data Grid libraries provided by Red Hat JBoss EAP. 10.1. Configuring applications to use Data Grid modules To use the Data Grid libraries provided by Red Hat JBoss EAP in your applications, add the Data Grid dependency to the application's pom.xml file. Procedure Import the Data Grid dependency management to control the versions of runtime Maven dependencies. <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-bom</artifactId> <version>USD{version.infinispan.bom}</version> <type>pom</type> <scope>import</scope> </dependency> You must define the value for USD{version.infinispan.bom} in the <properties> section of the pom.xml file. Declare the required Data Grid dependencies as provided . pom.xml <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> <scope>provided</scope> </dependency> </dependencies> 10.2. Configuring Data Grid caches in Red Hat JBoss EAP Create Data Grid caches in Red Hat JBoss EAP. Prerequisites Red Hat JBoss EAP is running. Procedure Connect to the Red Hat JBoss EAP management CLI. Create a cache container. This creates a cache container called exampleCacheContainer with statistics enabled. Add a cache to the cache container. This creates a local cache named exampleCache in the exampleCacheContainer cache container with statistics enabled. 10.3. Using Data Grid caches in Red Hat JBoss EAP applications You can access Data Grid caches in your applications through resource lookup. Prerequisites Red Hat JBoss EAP is running. You have created Data Grid caches in Red Hat JBoss EAP. Procedure You can look up Data Grid caches in your applications as follows: @Resource(lookup = "java:jboss/infinispan/cache/exampleCacheContainer/exampleCache") private Cache<String, String> ispnCache; This defines a Cache named ispnCache . You can put, get, and remove entries from the cache as follows: Get value of a key String value = ispnCache.get(key); This retrieves the value of the key in the cache. If the key is not found, null is returned. Put value in a key String oldValue = ispnCache.put(key,value); This creates the key if it does not already exist and associates it with the supplied value. If the key already exists, the original value is replaced. Remove a key String value = ispnCache.remove(key); This removes the key from the cache.
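Before wiring the cache into an application, you can confirm from the management CLI that the cache created in Section 10.2 is registered and collecting statistics. This is a quick check rather than part of the documented procedure; the container and cache names match the examples above.

jboss-cli.sh --connect
/subsystem=infinispan/cache-container=exampleCacheContainer/local-cache=exampleCache:read-resource(include-runtime=true)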
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-bom</artifactId> <version>USD{version.infinispan.bom}</version> <type>pom</type> <scope>import</scope> </dependency>", "<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> <scope>provided</scope> </dependency> </dependencies>", "jboss-cli.sh --connect", "/subsystem=infinispan/cache-container=exampleCacheContainer:add(statistics-enabled=true)", "/subsystem=infinispan/cache-container=exampleCacheContainer/local-cache=exampleCache:add(statistics-enabled=true)", "@Resource(lookup = \"java:jboss/infinispan/cache/exampleCacheContainer/exampleCache\") private Cache<String, String> ispnCache;", "String value = ispnCache.get(key);", "String oldValue = ispnCache.put(key,value);", "String value = ispnCache.remove(key);" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/ispn_modules
Chapter 13. Configuring the cluster network range
Chapter 13. Configuring the cluster network range As a cluster administrator, you can expand the cluster network range after cluster installation. You might want to expand the cluster network range if you need more IP addresses for additional nodes. For example, if you deployed a cluster and specified 10.128.0.0/19 as the cluster network range and a host prefix of 23 , you are limited to 16 nodes. You can expand that to 510 nodes by changing the CIDR mask on a cluster to /14 . When expanding the cluster network address range, your cluster must use the OVN-Kubernetes network plugin . Other network plugins are not supported. The following limitations apply when modifying the cluster network IP address range: The CIDR mask size specified must always be smaller than the currently configured CIDR mask size, because you can only increase IP space by adding more nodes to an installed cluster The host prefix cannot be modified Pods that are configured with an overridden default gateway must be recreated after the cluster network expands 13.1. Expanding the cluster network IP address range You can expand the IP address range for the cluster network. Because this change requires rolling out a new Operator configuration across the cluster, it can take up to 30 minutes to take effect. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To obtain the cluster network range and host prefix for your cluster, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/22","hostPrefix":23}] To expand the cluster network IP address range, enter the following command. Use the CIDR IP address range and host prefix returned in the output of the previous command. USD oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"<network>/<cidr>","hostPrefix":<prefix>} ], "networkType": "OVNKubernetes" } }' where: <network> Specifies the network part of the cidr field that you obtained from the previous step. You cannot change this value. <cidr> Specifies the network prefix length. For example, 14 . Change this value to a smaller number than the value from the output in the previous step to expand the cluster network range. <prefix> Specifies the current host prefix for your cluster. This value must be the same value for the hostPrefix field that you obtained from the previous step. Example command USD oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"10.217.0.0/14","hostPrefix": 23} ], "networkType": "OVNKubernetes" } }' Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take up to 30 minutes for this change to take effect. USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/14","hostPrefix":23}] 13.2. Additional resources About the OVN-Kubernetes network plugin
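As a quick check of the node-capacity arithmetic used at the start of this chapter: each node consumes one host-prefix-sized block out of the cluster network, so the number of node subnets is 2 raised to the difference between the host prefix and the CIDR mask. The shell arithmetic below reproduces the /19 versus /14 comparison; the gap between 512 subnets and the documented figure of 510 nodes is assumed to come from blocks that OVN-Kubernetes holds back for internal use.

# /19 cluster network with a /23 host prefix
echo $(( 2 ** (23 - 19) ))   # 16 node subnets
# /14 cluster network with a /23 host prefix
echo $(( 2 ** (23 - 14) ))   # 512 node subnets (the documentation cites 510 usable nodes)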
[ "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"", "[{\"cidr\":\"10.217.0.0/22\",\"hostPrefix\":23}]", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"<network>/<cidr>\",\"hostPrefix\":<prefix>} ], \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"10.217.0.0/14\",\"hostPrefix\": 23} ], \"networkType\": \"OVNKubernetes\" } }'", "network.config.openshift.io/cluster patched", "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"", "[{\"cidr\":\"10.217.0.0/14\",\"hostPrefix\":23}]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/configuring-cluster-network-range
36.5. Verifying the Initial RAM Disk Image
36.5. Verifying the Initial RAM Disk Image If the system uses the ext3 file system, a SCSI controller, or uses labels to reference partitions in /etc/fstab , an initial RAM disk is needed. The initial RAM disk allows a modular kernel to have access to modules that it might need to boot from before the kernel has access to the device where the modules normally reside. On the Red Hat Enterprise Linux architectures other than IBM eServer iSeries, the initial RAM disk can be created with the mkinitrd command. However, this step is performed automatically if the kernel and its associated packages are installed or upgraded from the RPM packages distributed by Red Hat, Inc.; thus, it does not need to be executed manually. To verify that it was created, use the command ls -l /boot to make sure the initrd- <version> .img file was created (the version should match the version of the kernel just installed). On iSeries systems, the initial RAM disk file and vmlinux file are combined into one file, which is created with the addRamDisk command. This step is performed automatically if the kernel and its associated packages are installed or upgraded from the RPM packages distributed by Red Hat, Inc.; thus, it does not need to be executed manually. To verify that it was created, use the command ls -l /boot to make sure the /boot/vmlinitrd- <kernel-version> file was created (the version should match the version of the kernel just installed). The next step is to verify that the boot loader has been configured to boot the new kernel. Refer to Section 36.6, "Verifying the Boot Loader" for details.
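A minimal sketch of the verification on a non-iSeries system follows. The kernel version string 2.6.9-42.EL is only an example; replace it with the version of the kernel you just installed. The mkinitrd line is needed only if the image is missing, for instance because the kernel was built outside of the Red Hat RPM packages.

# Check that an initial RAM disk exists for the new kernel
ls -l /boot/initrd-2.6.9-42.EL.img

# Only if the image is missing: create it manually
mkinitrd /boot/initrd-2.6.9-42.EL.img 2.6.9-42.EL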
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Manually_Upgrading_the_Kernel-Verifying_the_Initial_RAM_Disk_Image
16.2. Configuring Clustered Services
16.2. Configuring Clustered Services The IdM server is not cluster aware . However, it is possible to configure a clustered service to be part of IdM by synchronizing Kerberos keys across all of the participating hosts and configuring services running on the hosts to respond to whatever names the clients use. Enroll all of the hosts in the cluster into the IdM domain. Create any service principals and generate the required keytabs. Collect any keytabs that have been set up for services on the host, including the host keytab at /etc/krb5.keytab . Use the ktutil command to produce a single keytab file that contains the contents of all of the keytab files. For each file, use the rkt command to read the keys from that file. Use the wkt command to write all of the keys which have been read to a new keytab file. Replace the keytab files on each host with the newly-created combined keytab file. At this point, each host in this cluster can now impersonate any other host. Some services require additional configuration to accommodate cluster members which do not reset host names when taking over a failed service. For sshd , set GSSAPIStrictAcceptorCheck no in /etc/ssh/sshd_config . For mod_auth_kerb , set KrbServiceName Any in /etc/httpd/conf.d/auth_kerb.conf . Note For SSL servers, the subject name or a subject alternative name for the server's certificate must appear correct when a client connects to the clustered host. If possible, share the private key among all of the hosts. If each cluster member contains a subject alternative name which includes the names of all the other cluster members, that satisfies any client connection requirements.
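A sketch of the keytab-merging step follows. MIT Kerberos ktutil reads its subcommands from standard input, so the merge can be scripted; the keytab paths below (the host keytab plus an HTTP service keytab) are examples only, and the combined file must afterwards be copied to every cluster member and protected with the same ownership and permissions as the originals.

ktutil <<'EOF'
rkt /etc/krb5.keytab
rkt /etc/httpd/conf/httpd.keytab
wkt /tmp/combined.keytab
quit
EOF

# Distribute /tmp/combined.keytab to each host, replacing the per-service keytab files.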
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/ipa-cluster
6.10. Adding a Cluster Service to the Cluster
6.10. Adding a Cluster Service to the Cluster To configure a cluster service in a cluster, perform the following steps: Add a service to the cluster with the following command: Note Use a descriptive name that clearly distinguishes the service from other services in the cluster. When you add a service to the cluster configuration, you configure the following attributes: autostart - Specifies whether to autostart the service when the cluster starts. Use "1" to enable and "0" to disable; the default is enabled. domain - Specifies a failover domain (if required). exclusive - Specifies a policy wherein the service only runs on nodes that have no other services running on them. recovery - Specifies a recovery policy for the service. The options are to relocate, restart, disable, or restart-disable the service. The restart recovery policy indicates that the system should attempt to restart the failed service before trying to relocate the service to another node. The relocate policy indicates that the system should try to restart the service in a different node. The disable policy indicates that the system should disable the resource group if any component fails. The restart-disable policy indicates that the system should attempt to restart the service in place if it fails, but if restarting the service fails the service will be disabled instead of being moved to another host in the cluster. If you select Restart or Restart-Disable as the recovery policy for the service, you can specify the maximum number of restart failures before relocating or disabling the service, and you can specify the length of time in seconds after which to forget a restart. For example, to add a service to the configuration file on the cluster node node-01.example.com named example_apache that uses the failover domain example_pri , and that has recovery policy of relocate , execute the following command: When configuring services for a cluster, you may find it useful to see a listing of available services for your cluster and the options available for each service. For information on using the ccs command to print a list of available services and their options, see Section 6.11, "Listing Available Cluster Services and Resources" . Add resources to the service with the following command: Depending on the type of resources you want to use, populate the service with global or service-specific resources. To add a global resource, use the --addsubservice option of the ccs to add a resource. For example, to add the global file system resource named web_fs to the service named example_apache on the cluster configuration file on node-01.example.com , execute the following command: To add a service-specific resource to the service, you need to specify all of the service options. For example, if you had not previously defined web_fs as a global service, you could add it as a service-specific resource with the following command: To add a child service to the service, you also use the --addsubservice option of the ccs command, specifying the service options. If you need to add services within a tree structure of dependencies, use a colon (":") to separate elements and brackets to identify subservices of the same type. The following example adds a third nfsclient service as a subservice of an nfsclient service which is in itself a subservice of an nfsclient service which is a subservice of a service named service_a : Note If you are adding a Samba-service resource, add it directly to the service, not as a child of another resource. 
Note When configuring a dependency tree for a cluster service that includes a floating IP address resource, you must configure the IP resource as the first entry. Note To verify the existence of the IP service resource used in a cluster service, you can use the /sbin/ip addr show command on a cluster node (rather than the obsoleted ifconfig command). The following output shows the /sbin/ip addr show command executed on a node running a cluster service: To remove a service and all of its subservices, execute the following command: To remove a subservice, execute the following command: Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" .
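The restart-related settings mentioned earlier, the maximum number of restart failures and the time after which a restart is forgotten, can be supplied as additional service options on the same command line. The following is only a sketch: the attribute names max_restarts and restart_expire_time follow the cluster.conf schema, but verify the exact option names for your release with the listing commands described in Section 6.11.

ccs -h node-01.example.com --addservice example_apache domain=example_pri \
    recovery=restart max_restarts=3 restart_expire_time=300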
[ "ccs -h host --addservice servicename [service options]", "ccs -h node-01.example.com --addservice example_apache domain=example_pri recovery=relocate", "ccs -h host --addsubservice servicename subservice [service options]", "ccs -h node01.example.com --addsubservice example_apache fs ref=web_fs", "ccs -h node01.example.com --addsubservice example_apache fs name=web_fs device=/dev/sdd2 mountpoint=/var/www fstype=ext3", "ccs -h node01.example.com --addsubservice service_a nfsclient[1]:nfsclient[2]:nfsclient", "1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP> mtu 1356 qdisc pfifo_fast qlen 1000 link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0 inet6 fe80::205:5dff:fe9a:d891/64 scope link inet 10.11.4.240/22 scope global secondary eth0 valid_lft forever preferred_lft forever", "ccs -h host --rmservice servicename", "ccs -h host --rmsubservice servicename subservice [service options]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-add-service-ccs-CA
Chapter 11. DeploymentLog [apps.openshift.io/v1]
Chapter 11. DeploymentLog [apps.openshift.io/v1] Description DeploymentLog represents the logs for a deployment Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log GET : read log of the specified DeploymentConfig 11.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log Table 11.1. Global path parameters Parameter Type Description name string name of the DeploymentLog namespace string object name and auth scope, such as for teams and projects Table 11.2. Global query parameters Parameter Type Description container string The container for which to stream logs. Defaults to only container if there is one container in the pod. follow boolean Follow if true indicates that the build log should be streamed until the build terminates. limitBytes integer If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit. nowait boolean NoWait if true causes the call to return immediately even if the deployment is not available yet. Otherwise the server will wait until the deployment has started. pretty string If 'true', then the output is pretty printed. boolean Return deployment logs. Defaults to false. sinceSeconds integer A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified. tailLines integer If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime timestamps boolean If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. Defaults to false. version integer Version of the deployment for which to view logs. HTTP method GET Description read log of the specified DeploymentConfig Table 11.3. HTTP responses HTTP code Reponse body 200 - OK DeploymentLog schema 401 - Unauthorized Empty
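Because this endpoint is a plain GET, you can exercise it directly with the raw API support in the oc client. The namespace and deployment config names below are placeholders, and the query parameters shown are just a subset of the table above.

# Fetch the last 20 log lines of the specified DeploymentConfig with timestamps
oc get --raw "/apis/apps.openshift.io/v1/namespaces/<namespace>/deploymentconfigs/<name>/log?tailLines=20&timestamps=true"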
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/deploymentlog-apps-openshift-io-v1
Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service
Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service Red Hat Developer Hub 1.2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/index
Chapter 11. Configuring the audit log policy
Chapter 11. Configuring the audit log policy You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use. 11.1. About audit log policy profiles Audit log profiles define how to log requests that come to the OpenShift API server, Kubernetes API server, OpenShift OAuth API server, and OpenShift OAuth server. OpenShift Container Platform provides the following predefined audit policy profiles: Profile Description Default Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. WriteRequestBodies In addition to logging metadata for all requests, logs request bodies for every write request to the API servers ( create , update , patch , delete , deletecollection ). This profile has more resource overhead than the Default profile. [1] AllRequestBodies In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers ( get , list , create , update , patch ). This profile has the most resource overhead. [1] None No requests are logged; even OAuth access token requests and OAuth authorize token requests are not logged. Custom rules are ignored when this profile is set. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Sensitive resources, such as Secret , Route , and OAuthClient objects, are only ever logged at the metadata level. OpenShift OAuth server events are only ever logged at the metadata level. By default, OpenShift Container Platform uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage (CPU, memory, and I/O). 11.2. Configuring the audit log policy You can configure the audit log policy to use when logging requests that come to the API servers. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Update the spec.audit.profile field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: WriteRequestBodies 1 1 Set to Default , WriteRequestBodies , AllRequestBodies , or None . The default profile is Default . Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. 
The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 11.3. Configuring the audit log policy with custom rules You can configure an audit log policy that defines custom rules. You can specify multiple groups and define which profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. Important Custom rules are ignored if the top-level profile field is set to None . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Add the spec.audit.customRules field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2 1 Add one or more groups and specify the profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. 2 Set to Default , WriteRequestBodies , or AllRequestBodies . If you do not set this top-level profile field, it defaults to the Default profile. Warning Do not set the top-level profile field to None if you want to use custom rules. Custom rules are ignored if the top-level profile field is set to None . Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 11.4. Disabling audit logging You can disable audit logging for OpenShift Container Platform. When you disable audit logging, even OAuth access token requests and OAuth authorize token requests are not logged. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure Edit the APIServer resource: USD oc edit apiserver cluster Set the spec.audit.profile field to None : apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: None Note You can also disable audit logging only for specific groups by specifying custom rules in the spec.audit.customRules field. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12
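As a non-interactive alternative to oc edit, the following sketch shows how you might inspect the current profile and switch it with oc patch. This is an illustrative example rather than a documented procedure, and it assumes access as a user with the cluster-admin role.

```
# Check the audit profile that is currently configured
oc get apiserver cluster -o jsonpath='{.spec.audit.profile}{"\n"}'

# Switch to the WriteRequestBodies profile without opening an editor
oc patch apiserver cluster --type=merge -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'
```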
[ "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_and_compliance/audit-log-policy-config
8.112. libguestfs
8.112. libguestfs 8.112.1. RHBA-2014:1458 - libguestfs bug fix update Updated libguestfs packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The libguestfs packages contain a library, which is used for accessing and modifying virtual machine (VM) disk images. Note The virt-sysprep packages have been upgraded to upstream version 1.24.6, which provides a number of enhancements over the previous version, namely new features such as removing unnecessary files, including the possibility to specify custom paths, and setting user passwords. (BZ# 1037166 ) This update also fixes the following bugs: Bug Fixes BZ# 624335 The blockdev_setbsz API has been deprecated as the underlying implementation ("blockdev --setbsz") is no longer considered useful. BZ# 965495 Prior to this update, the gdisk utility was not available as a dependency on the libguestfs library. Consequently, gdisk was not available, and thus the guestfs_part_get_gpt_type and guestfs_part_set_gpt_type APIs were not usable. This update adds gdisk as a dependency, so gdisk and the aforementioned APIs are now available. BZ# 982979 Because the fstrim feature was marked as available, calling the fstrim API returned errors. As fstrim does not work with both the kernel and QEMU available in Red Hat Enterprise Linux 6, this update disables fstrim. Now, calling the fstrim API reports that fstrim is no longer available. BZ# 1056558 Previously, when the virt-sparsify utility was run with a block or character device as output, the output device was overwritten by a file, or deleted. To fix this bug, if a block or character device is specified as output, virt-sparsify refuses to run. BZ# 1072062 Due to a wrong implementation of the Guestfs.new() constructor in the Ruby binding, creating a new Guestfs instance often resulted in an error. With this update, the implementation of Guestfs.new() has been rewritten, and Guestfs.new now works correctly. BZ# 1091805 When running the tar-in guestfish command, or using the tar_in guestfs API, with a non-existing input tar or to a non-existing destination directory, the libguestfs appliance terminated unexpectedly. The error checking has been improved, and tar-in now cleanly returns an error. BZ# 1056558 Previously, the virt-sparsify utility did not check for free space available in the temporary directory. Consequently, virt-sparsify became unresponsive if the temporary directory lacked enough free space. With this update, virt-sparsify checks by default for available space before the sparsification operation, and virt-sparsify now works as intended. BZ# 1106548 Previously, particular permission checks for the root user in the FUSE layer of the libguestfs library were missing. Consequently, mounting a disk image using guestmount and accessing a directory as root with permissions 700 and not owned by root failed with the "permission denied" error. With this update, the permission handling has been fixed, and root can now access any directory of a disk image mounted using guestmount. BZ# 1123794 Previously, the libguestfs library did not close all open file descriptors when forking subprocesses such as QEMU. Consequently, if the parent process or any non-libguestfs library did not atomically set the O_CLOEXEC flag on file descriptors, those file descriptors leaked into QEMU. In OpenStack, this bug also caused deadlocks, because Python 2 is unable to set O_CLOEXEC atomically.
With this update, libguestfs closes all the file descriptors before executing QEMU, and deadlocks no longer occur. BZ# 1079182 Previously, the libguestfs library was skipping partitions with type 0x42, Windows Logical Disk Manager (LDM) volumes, when LDM was not available. Consequently, simple LDM volumes mountable as a single partition were ignored. With this update, the partition detection is not skipped if LDM is missing, and simple LDM volumes can now be recognized and mounted as plain NTFS partitions. The virt-sysprep packages have been upgraded to upstream version 1.24.6, which provides a number of enhancements over the previous version, namely new features such as removing unnecessary files, including the possibility to specify custom paths, and setting user passwords. (BZ#1037166) Users of libguestfs are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
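The following sketch shows typical invocations of the tools touched by these fixes; the image paths are hypothetical and the commands assume the updated libguestfs packages are installed.

```
# Mount a guest disk image read-only over FUSE; with the fix for BZ#1106548,
# root can browse directories with mode 700 that it does not own
guestmount -a /var/lib/libvirt/images/guest.img -i --ro /mnt/guest

# Sparsify an image into a new file; virt-sparsify now refuses to run if the
# output is a block or character device and checks free space in the temporary
# directory before starting
virt-sparsify /var/lib/libvirt/images/guest.img /var/tmp/guest-sparse.img
```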
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libguestfs
Chapter 6. Composable Services
Chapter 6. Composable Services Red Hat OpenStack Platform now includes the ability to define custom roles and compose service combinations on roles, see Composable Services and Custom Roles in the Advanced Overcloud Customization guide. As part of the integration, you can define your own custom services and include them on chosen roles. This section explores the composable service architecture and provides an example of how to integrate a custom service into the composable service architecture. 6.1. Examining Composable Service Architecture The core Heat template collection contains two sets of composable service templates: deployment contains the templates for key OpenStack Platform services. puppet/services contains legacy templates for configuring composable services. In some cases, the composable services use templates from this directory for compatibility. In most cases, the composable services use the templates in the deployment directory. Each template contains a description that identifies its purpose. For example, the deployment/time/ntp-baremetal-puppet.yaml service template contains the following description: These service templates are registered as resources specific to a Red Hat OpenStack Platform deployment. This means you can call each resource using a unique Heat resource namespace defined in the overcloud-resource-registry-puppet.j2.yaml file. All services use the OS::TripleO::Services namespace for their resource type. Some resources use the base composable service templates directly. For example: However, core services require containers and use the containerized service templates. For example, the keystone containerized service uses the following resource: These containerized templates usually reference other templates to include dependencies. For example, the deployment/keystone/keystone-container-puppet.yaml template stores the output of the base template in the ContainersCommon resource: The containerized template can then incorporate functions and data from the containers-common.yaml template. The overcloud.j2.yaml Heat template includes a section of Jinja2-based code to define a service list for each custom role in the roles_data.yaml file: For the default roles, this creates the following service list parameters: ControllerServices , ComputeServices , BlockStorageServices , ObjectStorageServices , and CephStorageServices . You define the default services for each custom role in the roles_data.yaml file. For example, the default Controller role contains the following content: These services are then defined as the default list for the ControllerServices parameter. Note You can also use an environment file to override the default list for the service parameters. For example, you can define ControllerServices as a parameter_default in an environment file to override the services list from the roles_data.yaml file. 6.2. Creating a User-Defined Composable Service This example examines how to create a user-defined composable service and focuses on implementing a message of the day ( motd ) service. This example assumes the overcloud image contains a custom motd Puppet module loaded either through a configuration hook or through modifying the overcloud images as per Chapter 3, Overcloud Images . When creating your own service, there are specific items to define in the service's Heat template: parameters The following are compulsory parameters that you must include in your service template: ServiceNetMap - A map of services to networks. 
Use an empty hash ( {} ) as the default value as this parameter is overridden with values from the parent Heat template. DefaultPasswords - A list of default passwords. Use an empty hash ( {} ) as the default value as this parameter is overridden with values from the parent Heat template. EndpointMap - A list of OpenStack service endpoints to protocols. Use an empty hash ( {} ) as the default value as this parameter is overridden with values from the parent Heat template. Define any additional parameters that your service requires. outputs The following output parameters define the service configuration on the host. See Appendix A, Composable service parameters for information on all composable service parameters. The following is an example Heat template ( service.yaml ) for the motd service: 1 The template includes a MotdMessage parameter used to define the message of the day. The parameter includes a default message but you can override it using the same parameter in a custom environment file, which is demonstrated later. 2 The outputs section defines some service hieradata in config_settings . The motd::content hieradata stores the content from the MotdMessage parameter. The motd Puppet class eventually reads this hieradata and passes the user-defined message to the /etc/motd file. 3 The outputs section includes a Puppet manifest snippet in step_config . The snippet checks if the configuration has reached step 2 and, if so, runs the motd Puppet class. 6.3. Including a User-Defined Composable Service The aim for this example is to configure the custom motd service only on our overcloud's Controller nodes. This requires a custom environment file and custom roles data file included with our deployment. First, add the new service to an environment file ( env-motd.yaml ) as a registered Heat resource within the OS::TripleO::Services namespace. For this example, the resource name for our motd service is OS::TripleO::Services::Motd : Note that our custom environment file also includes a new message that overrides the default for MotdMessage . The deployment will now include the motd service. However, each role that requires this new service must have an updated ServicesDefault listing in a custom roles_data.yaml file. In this example, we aim to only configure the service on Controller nodes. Create a copy of the default roles_data.yaml file: Edit this file, scroll to the Controller role, and include the service in the ServicesDefault listing: When creating an overcloud, include the resulting environment file and the custom_roles_data.yaml file with your other environment files and deployment options: This includes our custom motd service in our deployment and configures the service on Controller nodes only.
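Before deploying, a quick sanity check such as the following confirms that the new resource is registered in the environment file and listed for the Controller role. This is a hedged example that reuses the file paths from this chapter.

```
# Verify the Motd service registration and the Controller service list entry
grep -A 1 'OS::TripleO::Services::Motd' /home/stack/templates/env-motd.yaml
grep 'OS::TripleO::Services::Motd' ~/custom_roles_data.yaml
```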
[ "description: > NTP service deployment using puppet, this YAML file creates the interface between the HOT template and the puppet manifest that actually installs and configure NTP.", "resource_registry: OS::TripleO::Services::Ntp: deployment/time/ntp-baremetal-puppet.yaml", "resource_registry: OS::TripleO::Services::Keystone: deployment/keystone/keystone-container-puppet.yaml", "resources: ContainersCommon: type: ../containers-common.yaml", "{{role.name}}Services: description: A list of service resources (configured in the Heat resource_registry) which represent nested stacks for each service that should get installed on the {{role.name}} role. type: comma_delimited_list default: {{role.ServicesDefault|default([])}}", "- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Core - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry", "heat_template_version: 2016-04-08 description: > Message of the day service configured with Puppet parameters: ServiceNetMap: default: {} type: json DefaultPasswords: default: {} type: json EndpointMap: default: {} type: json MotdMessage: 1 default: | Welcome to my Red Hat OpenStack Platform environment! type: string description: The message to include in the motd outputs: role_data: description: Motd role using composable services. value: service_name: motd config_settings: 2 motd::content: {get_param: MotdMessage} step_config: | 3 if hiera('step') >= 2 { include ::motd }", "resource_registry: OS::TripleO::Services::Motd: /home/stack/templates/motd.yaml parameter_defaults: MotdMessage: | You have successfully accessed my Red Hat OpenStack Platform environment!", "cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml ~/custom_roles_data.yaml", "- name: Controller CountDefault: 1 ServicesDefault: - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephExternal - OS::TripleO::Services::FluentdClient - OS::TripleO::Services::VipHosts - OS::TripleO::Services::Motd # Add the service to the end", "openstack overcloud deploy --templates -e /home/stack/templates/env-motd.yaml -r ~/custom_roles_data.yaml [OTHER OPTIONS]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/partner_integration/Composable_Services
Chapter 1. Data Grid Operator
Chapter 1. Data Grid Operator Data Grid Operator provides operational intelligence and reduces management complexity for deploying Data Grid on Kubernetes and Red Hat OpenShift. 1.1. Data Grid Operator deployments When you install Data Grid Operator, it extends the Kubernetes API with Custom Resource Definitions (CRDs) for deploying and managing Data Grid clusters on Red Hat OpenShift. To interact with Data Grid Operator, OpenShift users apply Custom Resources (CRs) through the OpenShift Web Console or oc client. Data Grid Operator listens for Infinispan CRs and automatically provisions native resources, such as StatefulSets and Secrets, that your Data Grid deployment requires. Data Grid Operator also configures Data Grid services according to the specifications in Infinispan CRs, including the number of pods for the cluster and backup locations for cross-site replication. Figure 1.1. Custom resources 1.2. Cluster management A single Data Grid Operator installation can manage multiple clusters with different Data Grid versions in separate namespaces. Each time a user applies CRs to modify the deployment, Data Grid Operator applies the changes globally to all Data Grid clusters. Figure 1.2. Operator-managed clusters 1.3. Resource reconciliation Data Grid Operator reconciles custom resources such as the Cache CR with resources on your Data Grid cluster. Bidirectional reconciliation synchronizes your CRs with changes that you make to Data Grid resources through the Data Grid Console, command line interface (CLI), or other client application and vice versa. For example if you create a cache through the Data Grid Console then Data Grid Operator adds a declarative Kubernetes representation. To perform reconciliation Data Grid Operator creates a listener pod for each Data Grid cluster that detects modifications for Infinispan resources. Notes about reconciliation When you create a cache through the Data Grid Console, CLI, or other client application, Data Grid Operator creates a corresponding Cache CR with a unique name that conforms to the Kubernetes naming policy. Declarative Kubernetes representations of Data Grid resources that Data Grid Operator creates with the listener pod are linked to Infinispan CRs. Deleting Infinispan CRs removes any associated resource declarations.
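As a minimal, hedged sketch of the pattern described above, applying an Infinispan CR such as the following is enough for Data Grid Operator to provision the native resources for a small cluster; the resource name and replica count are illustrative.

```
oc apply -f - <<'EOF'
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
EOF
```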
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/operator
13.2.22. Creating Domains: Access Control
13.2.22. Creating Domains: Access Control SSSD provides a rudimentary access control for domain configuration, allowing either simple user allow/deny lists or using the LDAP back end itself. Using the Simple Access Provider The Simple Access Provider allows or denies access based on a list of user names or groups. The Simple Access Provider is a way to restrict access to certain, specific machines. For example, if a company uses laptops, the Simple Access Provider can be used to restrict access to only a specific user or a specific group, even if a different user authenticated successfully against the same authentication provider. The most common options are simple_allow_users and simple_allow_groups , which grant access explicitly to specific users (either the given users or group members) and deny access to everyone else. It is also possible to create deny lists (which deny access only to explicit people and implicitly allow everyone else access). The Simple Access Provider adheres to the following four rules to determine which users should or should not be granted access: If both the allow and deny lists are empty, access is granted. If any list is provided, allow rules are evaluated first, and then deny rules. Practically, this means that deny rules supersede allow rules. If an allowed list is provided, then all users are denied access unless they are in the list. If only deny lists are provided, then all users are allowed access unless they are in the list. This example grants access to two users and anyone who belongs to the IT group; implicitly, all other users are denied: Note The LOCAL domain in SSSD does not support simple as an access provider. Other options are listed in the sssd-simple man page, but these are rarely used. Using the Access Filters An LDAP, Active Directory, or Identity Management server can provide access control rules for a domain. The associated options ( ldap_access_filter for LDAP and IdM and ad_access_filter for AD) specify which users are granted access to the specified host. The user filter must be used or all users are denied access. See the examples below: Note Offline caching for LDAP access providers is limited to determining whether the user's last online login attempt was successful. Users that were granted access during their last login will continue to be granted access while offline. SSSD can also check results by the authorizedService or host attribute in an entry. In fact, all options - LDAP filter, authorizedService , and host - can be evaluated, depending on the user entry and the configuration. The ldap_access_order parameter lists all access control methods to use, in order of how they should be evaluated. The attributes in the user entry to use to evaluate authorized services or allowed hosts can be customized. Additional access control parameters are listed in the sssd-ldap(5) man page.
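In addition to the allow-list configurations shown in the listing below, a deny list implicitly grants access to everyone except the named users or groups. The following sketch uses hypothetical user and group names.

```
[domain/example.com]
access_provider = simple
simple_deny_users = jsmith
simple_deny_groups = contractors
```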
[ "[domain/example.com] access_provider = simple simple_allow_users = jsmith,bjensen simple_allow_groups = itgroup", "[domain/example.com] access_provider = ldap ldap_access_filter = memberOf=cn=allowedusers,ou=Groups,dc=example,dc=com", "[domain/example.com] access_provider = ad ad_access_filter = memberOf=cn=allowedusers,ou=Groups,dc=example,dc=com", "[domain/example.com] access_provider = ldap ldap_access_filter = memberOf=cn=allowedusers,ou=Groups,dc=example,dc=com ldap_access_order = filter, host, authorized_service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-config-sssd-domain-access
Chapter 1. Understanding networking
Chapter 1. Understanding networking Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections: Service types, such as node ports or load balancers API resources, such as Ingress and Route By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. Note Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 169.254.0.0/16 CIDR block. This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the spec.hostNetwork field in the pod spec to true . If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure. 1.1. OpenShift Container Platform DNS If you are running multiple services, such as front-end and back-end services for use with multiple pods, environment variables are created for user names, service IPs, and more so the front-end pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, and requires the front-end pods to be recreated to pick up the updated values for the service IP environment variable. Additionally, the back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly, and that it can be provided to the front-end pods as an environment variable. For this reason, OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. 1.2. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 1.2.1. Comparing routes and Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on HAProxy , which is an open source load balancer solution. 
The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In OpenShift Container Platform, routes are generated to meet the conditions specified by the Ingress resource.
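For example, a pod that must reach the link-local metadata API described in the note above needs spec.hostNetwork set to true. The following is a minimal, hedged sketch; the pod name and image are placeholders, and the pod's service account may additionally need the hostnetwork security context constraint.

```
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: metadata-client
spec:
  hostNetwork: true
  containers:
  - name: client
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "3600"]
EOF
```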
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/understanding-networking
Scalability and performance
Scalability and performance OpenShift Container Platform 4.17 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/index
Chapter 3. Eclipse Temurin features
Chapter 3. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 11 release of Eclipse Temurin includes, see OpenJDK 11.0.26 Released . New features and enhancements Eclipse Temurin 11.0.26 includes the following new features and enhancements. Option for jar command to avoid overwriting files when extracting an archive In earlier OpenJDK releases, when the jar tool extracted files from an archive, the jar tool overwrote any existing files with the same name in the target directory. OpenJDK 11.0.26 adds a new -k (or --keep-old-files ) option that you can use to ensure that the jar tool does not overwrite existing files. You can specify this new option in either short or long format. For example: jar xkf myfile.jar jar --extract --keep-old-files --file myfile.jar Note In OpenJDK 11.0.26, the jar tool retains the old behavior by default. If you do not explicitly specify the -k (or --keep-old-files ) option, the jar tool automatically overwrites any existing files with the same name. See JDK-8335912 (JDK Bug System) and JDK bug system reference ID: JDK-8337499. IANA time zone database updated to version 2024b In OpenJDK 11.0.26, the in-tree copy of the Internet Assigned Numbers Authority (IANA) time zone database is updated to version 2024b. This update is primarily concerned with improving historical data for Mexico, Mongolia, and Portugal. This update to the IANA database also includes the following changes: Asia/Choibalsan is an alias for Asia/Ulaanbaatar . The Middle European Time (MET) time zone is equal to Central European Time (CET). Some legacy time-zone IDs are mapped to geographical names rather than fixed offsets: Eastern Standard Time (EST) is mapped to America/Panama rather than -5:00 . Mountain Standard Time (MST) is mapped to America/Phoenix rather than -7:00 . Hawaii Standard Time (HST) is mapped to Pacific/Honolulu rather than -10:00 . OpenJDK overrides the change in the legacy time-zone ID mappings by retaining the existing fixed-offset mapping. See JDK-8339637 (JDK Bug System) . Revised on 2025-02-10 17:43:33 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.26/openjdk-temurin-features-11-0-26_openjdk
Chapter 20. Configuring an Installed Linux on IBM Z Instance
Chapter 20. Configuring an Installed Linux on IBM Z Instance For more information about Linux on IBM Z, see the publications listed in Chapter 22, IBM Z References . Some of the most common tasks are described here. 20.1. Adding DASDs DASDs ( Direct Access Storage Devices ) are a type of storage commonly used with IBM Z. Additional information about working with these storage devices can be found at the IBM Knowledge Center at http://www-01.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_t_dasd_wrk.html . The following is an example of how to set a DASD online, format it, and make the change persistent. Note Make sure the device is attached or linked to the Linux system if running under z/VM. To link a mini disk to which you have access, issue, for example: See z/VM: CP Commands and Utilities Reference, SC24-6175 for details about the commands. 20.1.1. Dynamically Setting DASDs Online To set a DASD online, follow these steps: Use the cio_ignore utility to remove the DASD from the list of ignored devices and make it visible to Linux: Replace device_number with the device number of the DASD. For example: Set the device online. Use a command of the following form: Replace device_number with the device number of the DASD. For example: As an alternative, you can set the device online using sysfs attributes: Use the cd command to change to the /sys/ directory that represents that volume: Check to see if the device is already online: If it is not online, enter the following command to bring it online: Verify which block devnode it is being accessed as: As shown in this example, device 4B2E is being accessed as /dev/dasdb. These instructions set a DASD online for the current session, but this is not persistent across reboots. For instructions on how to set a DASD online persistently, see Section 20.1.3, "Persistently Setting DASDs Online" . When you work with DASDs, use the persistent device symbolic links under /dev/disk/by-path/ . See the chapter about persistent storage device naming in the Red Hat Enterprise Linux 7 Storage Administration Guide for more in-depth information about different ways to consistently refer to storage devices. 20.1.2. Preparing a New DASD with Low-level Formatting Once the disk is online, change back to the /root directory and low-level format the device. This is only required once for a DASD during its entire lifetime: When the progress bar reaches the end and the format is complete, dasdfmt prints the following output: Now, use fdasd to partition the DASD. You can create up to three partitions on a DASD. In our example here, we create one partition spanning the whole disk: After a (low-level formatted) DASD is online, it can be used like any other disk under Linux. For instance, you can create file systems, LVM physical volumes, or swap space on its partitions, for example /dev/disk/by-path/ccw-0.0.4b2e-part1 . Never use the full DASD device ( /dev/dasdb ) for anything but the commands dasdfmt and fdasd . If you want to use the entire DASD, create one partition spanning the entire drive as in the fdasd example above. To add additional disks later without breaking existing disk entries in, for example, /etc/fstab , use the persistent device symbolic links under /dev/disk/by-path/ . 20.1.3. Persistently Setting DASDs Online The above instructions described how to activate DASDs dynamically in a running system. However, such changes are not persistent and do not survive a reboot.
Making changes to the DASD configuration persistent in your Linux system depends on whether the DASDs belong to the root file system. Those DASDs required for the root file system need to be activated very early during the boot process by the initramfs to be able to mount the root file system. The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. 20.1.3.1. DASDs That Are Part of the Root File System The only file you have to modify to add DASDs that are part of the root file system is /etc/zipl.conf . Then run the zipl boot loader tool. There is no need to recreate the initramfs . There is one boot option to activate DASDs early in the boot process: rd.dasd= . This option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma separated list of bus IDs. To specify a range of DASDs, specify the first and the last bus ID. Below is an example zipl.conf for a system that uses physical volumes on partitions of two DASDs for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system. Suppose that you want to add another physical volume on a partition of a third DASD with device bus ID 0.0.202b . To do this, add rd.dasd=0.0.202b to the parameters line of your boot kernel in zipl.conf : Warning Make sure the length of the kernel command line in /etc/zipl.conf does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails. Run zipl to apply the changes of /etc/zipl.conf for the IPL: 20.1.3.2. DASDs That Are Not Part of the Root File System DASDs that are not part of the root file system, that is, data disks , are persistently configured in the file /etc/dasd.conf . It contains one DASD per line. Each line begins with the device bus ID of a DASD. Optionally, each line can continue with options separated by space or tab characters. Options consist of key-value-pairs, where the key and value are separated by an equals sign. The key corresponds to any valid sysfs attribute a DASD can have. The value will be written to the key's sysfs attribute. Entries in /etc/dasd.conf are activated and configured by udev when a DASD is added to the system. At boot time, all DASDs visible to the system get added and trigger udev . Example content of /etc/dasd.conf : Modifications of /etc/dasd.conf only become effective after a reboot of the system or after the dynamic addition of a new DASD by changing the system's I/O configuration (that is, the DASD is attached under z/VM). Alternatively, you can trigger the activation of a new entry in /etc/dasd.conf for a DASD which was previously not active, by executing the following commands: Use the cio_ignore utility to remove the DASD from the list of ignored devices and make it visible to Linux: For example: Trigger the activation by writing to the uevent attribute of the device: For example:
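Assuming the lsdasd tool from the s390utils package is installed, the following hedged example verifies that a DASD is online and that an entry added to /etc/dasd.conf was activated by udev:

```
lsdasd 0.0.4b2e   # show the status, bus ID, and device node of a specific DASD
lsdasd            # list all DASDs after the uevent trigger or a reboot
```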
[ "CP ATTACH EB1C TO *", "CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W", "cio_ignore -r device_number", "cio_ignore -r 4b2e", "chccwdev -e device_number", "chccwdev -e 4b2e", "cd /sys/bus/ccw/drivers/dasd-eckd/0.0.4b2e/ # ls -l total 0 -r--r--r-- 1 root root 4096 Aug 25 17:04 availability -rw-r--r-- 1 root root 4096 Aug 25 17:04 cmb_enable -r--r--r-- 1 root root 4096 Aug 25 17:04 cutype -rw-r--r-- 1 root root 4096 Aug 25 17:04 detach_state -r--r--r-- 1 root root 4096 Aug 25 17:04 devtype -r--r--r-- 1 root root 4096 Aug 25 17:04 discipline -rw-r--r-- 1 root root 4096 Aug 25 17:04 online -rw-r--r-- 1 root root 4096 Aug 25 17:04 readonly -rw-r--r-- 1 root root 4096 Aug 25 17:04 use_diag", "cat online 0", "echo 1 > online # cat online 1", "ls -l total 0 -r--r--r-- 1 root root 4096 Aug 25 17:04 availability lrwxrwxrwx 1 root root 0 Aug 25 17:07 block -> ../../../../block/dasdb -rw-r--r-- 1 root root 4096 Aug 25 17:04 cmb_enable -r--r--r-- 1 root root 4096 Aug 25 17:04 cutype -rw-r--r-- 1 root root 4096 Aug 25 17:04 detach_state -r--r--r-- 1 root root 4096 Aug 25 17:04 devtype -r--r--r-- 1 root root 4096 Aug 25 17:04 discipline -rw-r--r-- 1 root root 0 Aug 25 17:04 online -rw-r--r-- 1 root root 4096 Aug 25 17:04 readonly -rw-r--r-- 1 root root 4096 Aug 25 17:04 use_diag", "cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%", "Rereading the partition table Exiting", "fdasd -a /dev/disk/by-path/ccw-0.0.4b2e auto-creating one partition for the whole disk writing volume label writing VTOC checking ! wrote NATIVE! 
rereading partition table", "[defaultboot] default=linux target=/boot/ [linux] image=/boot/vmlinuz-2.6.32-19.el7.s390x ramdisk=/boot/initramfs-2.6.32-19.el7.s390x.img parameters=\"root=/dev/mapper/vg_devel1-lv_root rd.dasd=0.0.0200,use_diag=0,readonly=0,erplog=0,failfast=0 rd.dasd=0.0.0207,use_diag=0,readonly=0,erplog=0,failfast=0 rd_LVM_LV=vg_devel1/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!condev\"", "[defaultboot] default=linux target=/boot/ [linux] image=/boot/vmlinuz-2.6.32-19.el7.s390x ramdisk=/boot/initramfs-2.6.32-19.el7.s390x.img parameters=\"root=/dev/mapper/vg_devel1-lv_root rd.dasd=0.0.0200,use_diag=0,readonly=0,erplog=0,failfast=0 rd.dasd=0.0.0207,use_diag=0,readonly=0,erplog=0,failfast=0 rd.dasd=0.0.202b rd_LVM_LV=vg_devel1/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!condev\"", "zipl -V Using config file '/etc/zipl.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 3308 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 595416 Building bootmap in '/boot/' Building menu 'rh-automatic-menu' Adding #1: IPL section 'linux' (default) kernel image......: /boot/vmlinuz-2.6.32-19.el7.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root rd.dasd=0.0.0200,use_diag=0,readonly=0,erplog=0,failfast=0 rd.dasd=0.0.0207,use_diag=0,readonly=0,erplog=0,failfast=0 rd.dasd=0.0.202b rd_LVM_LV=vg_devel1/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us cio_ignore=all,!condev' initial ramdisk...: /boot/initramfs-2.6.32-19.el7.s390x.img component address: kernel image....: 0x00010000-0x00a70fff parmline........: 0x00001000-0x00001fff initial ramdisk.: 0x02000000-0x022d2fff internal loader.: 0x0000a000-0x0000afff Preparing boot device: dasda (0201). Preparing boot menu Interactive prompt......: enabled Menu timeout............: 15 seconds Default configuration...: 'linux' Syncing disks Done.", "0.0.0207 0.0.0200 use_diag=1 readonly=1", "cio_ignore -r device_number", "cio_ignore -r 021a", "echo add > /sys/bus/ccw/devices/ device-bus-ID /uevent", "echo add > /sys/bus/ccw/devices/0.0.021a/uevent" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-post-installation-configuration-s390
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. As the Component, use Documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_7/proc_providing-feedback-on-red-hat-documentation_getting-started-with-dotnet-on-rhel-7
4.7. Modifying and Deleting Fencing Devices
4.7. Modifying and Deleting Fencing Devices Use the following command to modify or add options to a currently configured fencing device. Use the following command to remove a fencing device from the current configuration.
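For example, the generic forms shown in the listing below might be used as follows. This is a hedged sketch; the device name myapc and the options shown depend on the fence agent in use.

```
# Update the IP address and add a 5 second delay on an existing fence device
pcs stonith update myapc ipaddr="apc-new.example.com" delay="5"

# Remove the fence device when it is no longer needed
pcs stonith delete myapc
```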
[ "pcs stonith update stonith_id [ stonith_device_options ]", "pcs stonith delete stonith_id" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicemodify-haar
Chapter 16. Managing instances
Chapter 16. Managing instances As a cloud administrator, you can monitor and manage the instances running on your cloud. 16.1. Securing connections to the VNC console of an instance You can secure connections to the VNC console for an instance by configuring the allowed TLS ciphers and the minimum protocol version to enforce for incoming client connections to the VNC proxy service. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open your Compute environment file. Configure the minimum protocol version to use for VNC console connections to instances: Replace <version> with the minimum allowed SSL/TLS protocol version. Set to one of the following valid values: default : Uses the underlying system OpenSSL defaults. tlsv1_1 : Use if you have clients that do not support a later version. Note TLS 1.0 and TLS 1.1 are deprecated in RHEL 8, and not supported in RHEL 9. tlsv1_2 : Use if you want to configure the SSL/TLS ciphers to use for VNC console connections to instances. If you set the minimum allowed SSL/TLS protocol version to tlsv1_2 , then configure the SSL/TLS ciphers to use for VNC console connections to instances: Replace <ciphers> with a colon-delimited list of the cipher suites to allow. Retrieve the list of available ciphers from openssl . Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 16.2. Database cleaning The Compute service includes an administrative tool, nova-manage , that you can use to perform deployment, upgrade, clean-up, and maintenance-related tasks, such as applying database schemas, performing online data migrations during an upgrade, and managing and cleaning up the database. Director automates the following database management tasks on the overcloud by using cron: Archives deleted instance records by moving the deleted rows from the production tables to shadow tables. Purges deleted rows from the shadow tables after archiving is complete. 16.2.1. Configuring database management The cron jobs use default settings to perform database management tasks. By default, the database archiving cron jobs run daily at 00:01, and the database purging cron jobs run daily at 05:00, both with a jitter between 0 and 3600 seconds. You can modify these settings as required by using heat parameters. Procedure Open your Compute environment file. Add the heat parameter that controls the cron job that you want to add or modify. For example, to purge the shadow tables immediately after they are archived, set the following parameter to "True": For a complete list of the heat parameters to manage database cron jobs, see Configuration options for the Compute service automated database management . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 16.2.2. Configuration options for the Compute service automated database management Use the following heat parameters to enable and modify the automated cron jobs that manage the database. Table 16.1. Compute (nova) service cron parameters Parameter Description NovaCronArchiveDeleteAllCells Set this parameter to "True" to archive deleted instance records from all cells. Default: True NovaCronArchiveDeleteRowsAge Use this parameter to archive deleted instance records based on their age in days. Set to 0 to archive data older than today in shadow tables. 
Default: 90 NovaCronArchiveDeleteRowsDestination Use this parameter to configure the file for logging deleted instance records. Default: /var/log/nova/nova-rowsflush.log NovaCronArchiveDeleteRowsHour Use this parameter to configure the hour at which to run the cron command to move deleted instance records to another table. Default: 0 NovaCronArchiveDeleteRowsMaxDelay Use this parameter to configure the maximum delay, in seconds, before moving deleted instance records to another table. Default: 3600 NovaCronArchiveDeleteRowsMaxRows Use this parameter to configure the maximum number of deleted instance records that can be moved to another table. Default: 1000 NovaCronArchiveDeleteRowsMinute Use this parameter to configure the minute past the hour at which to run the cron command to move deleted instance records to another table. Default: 1 NovaCronArchiveDeleteRowsMonthday Use this parameter to configure on which day of the month to run the cron command to move deleted instance records to another table. Default: * (every day) NovaCronArchiveDeleteRowsMonth Use this parameter to configure in which month to run the cron command to move deleted instance records to another table. Default: * (every month) NovaCronArchiveDeleteRowsPurge Set this parameter to "True" to purge shadow tables immediately after scheduled archiving. Default: False NovaCronArchiveDeleteRowsUntilComplete Set this parameter to "True" to continue to move deleted instance records to another table until all records are moved. Default: True NovaCronArchiveDeleteRowsUser Use this parameter to configure the user that owns the crontab that archives deleted instance records and that has access to the log file the crontab uses. Default: nova NovaCronArchiveDeleteRowsWeekday Use this parameter to configure on which day of the week to run the cron command to move deleted instance records to another table. Default: * (every day) NovaCronPurgeShadowTablesAge Use this parameter to purge shadow tables based on their age in days. Set to 0 to purge shadow tables older than today. Default: 14 NovaCronPurgeShadowTablesAllCells Set this parameter to "True" to purge shadow tables from all cells. Default: True NovaCronPurgeShadowTablesDestination Use this parameter to configure the file for logging purged shadow tables. Default: /var/log/nova/nova-rowspurge.log NovaCronPurgeShadowTablesHour Use this parameter to configure the hour at which to run the cron command to purge shadow tables. Default: 5 NovaCronPurgeShadowTablesMaxDelay Use this parameter to configure the maximum delay, in seconds, before purging shadow tables. Default: 3600 NovaCronPurgeShadowTablesMinute Use this parameter to configure the minute past the hour at which to run the cron command to purge shadow tables. Default: 0 NovaCronPurgeShadowTablesMonth Use this parameter to configure in which month to run the cron command to purge the shadow tables. Default: * (every month) NovaCronPurgeShadowTablesMonthday Use this parameter to configure on which day of the month to run the cron command to purge the shadow tables. Default: * (every day) NovaCronPurgeShadowTablesUser Use this parameter to configure the user that owns the crontab that purges the shadow tables and that has access to the log file the crontab uses. Default: nova NovaCronPurgeShadowTablesVerbose Use this parameter to enable verbose logging in the log file for purged shadow tables. 
Default: False NovaCronPurgeShadowTablesWeekday Use this parameter to configure on which day of the week to run the cron command to purge the shadow tables. Default: * (every day) 16.3. Migrating virtual machine instances between Compute nodes You sometimes need to migrate instances from one Compute node to another Compute node in the overcloud, to perform maintenance, rebalance the workload, or replace a failed or failing node. Compute node maintenance If you need to temporarily take a Compute node out of service, for instance, to perform hardware maintenance or repair, kernel upgrades and software updates, you can migrate instances running on the Compute node to another Compute node. Failing Compute node If a Compute node is about to fail and you need to service it or replace it, you can migrate instances from the failing Compute node to a healthy Compute node. Failed Compute nodes If a Compute node has already failed, you can evacuate the instances. You can rebuild instances from the original image on another Compute node, using the same name, UUID, network addresses, and any other allocated resources the instance had before the Compute node failed. Workload rebalancing You can migrate one or more instances to another Compute node to rebalance the workload. For example, you can consolidate instances on a Compute node to conserve power, migrate instances to a Compute node that is physically closer to other networked resources to reduce latency, or distribute instances across Compute nodes to avoid hot spots and increase resiliency. Director configures all Compute nodes to provide secure migration. All Compute nodes also require a shared SSH key to provide the users of each host with access to other Compute nodes during the migration process. Director creates this key using the OS::TripleO::Services::NovaCompute composable service. This composable service is one of the main services included on all Compute roles by default. For more information, see Composable Services and Custom Roles in the Advanced Overcloud Customization guide. Note If you have a functioning Compute node, and you want to make a copy of an instance for backup purposes, or to copy the instance to a different environment, follow the procedure in Importing virtual machines into the overcloud in the Director Installation and Usage guide. 16.3.1. Migration types Red Hat OpenStack Platform (RHOSP) supports the following types of migration. Cold migration Cold migration, or non-live migration, involves shutting down a running instance before migrating it from the source Compute node to the destination Compute node. Cold migration involves some downtime for the instance. The migrated instance maintains access to the same volumes and IP addresses. Note Cold migration requires that both the source and destination Compute nodes are running. Live migration Live migration involves moving the instance from the source Compute node to the destination Compute node without shutting it down, and while maintaining state consistency. Live migrating an instance involves little or no perceptible downtime. However, live migration does impact performance for the duration of the migration operation. Therefore, instances should be taken out of the critical path while being migrated. Important Live migration impacts the performance of the workload being moved. 
Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. Note Live migration requires that both the source and destination Compute nodes are running. In some cases, instances cannot use live migration. For more information, see Migration constraints . Evacuation If you need to migrate instances because the source Compute node has already failed, you can evacuate the instances. 16.3.2. Migration constraints Migration constraints typically arise with block migration, configuration disks, or when one or more instances access physical hardware on the Compute node. CPU constraints The source and destination Compute nodes must have the same CPU architecture. For example, Red Hat does not support migrating an instance from an x86_64 CPU to a ppc64le CPU. Migration between different CPU models is not supported. In some cases, the CPU of the source and destination Compute node must match exactly, such as instances that use CPU host passthrough. In all cases, the CPU features of the destination node must be a superset of the CPU features on the source node. Memory constraints The destination Compute node must have sufficient available RAM. Memory oversubscription can cause migration to fail. Block migration constraints Migrating instances that use disks that are stored locally on a Compute node takes significantly longer than migrating volume-backed instances that use shared storage, such as Red Hat Ceph Storage. This latency arises because OpenStack Compute (nova) migrates local disks block-by-block between the Compute nodes over the control plane network by default. By contrast, volume-backed instances that use shared storage, such as Red Hat Ceph Storage, do not have to migrate the volumes, because each Compute node already has access to the shared storage. Note Network congestion in the control plane network caused by migrating local disks or instances that consume large amounts of RAM might impact the performance of other systems that use the control plane network, such as RabbitMQ. Read-only drive migration constraints Migrating a drive is supported only if the drive has both read and write capabilities. For example, OpenStack Compute (nova) cannot migrate a CD-ROM drive or a read-only config drive. However, OpenStack Compute (nova) can migrate a drive with both read and write capabilities, including a config drive with a drive format such as vfat . Live migration constraints In some cases, live migrating instances involves additional constraints. Important Live migration impacts the performance of the workload being moved. Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. No new operations during migration To achieve state consistency between the copies of the instance on the source and destination nodes, RHOSP must prevent new operations during live migration. Otherwise, live migration might take a long time or potentially never end if writes to memory occur faster than live migration can replicate the state of the memory. CPU pinning with NUMA The NovaSchedulerDefaultFilters parameter in the Compute configuration must include the values AggregateInstanceExtraSpecsFilter and NUMATopologyFilter . 
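As a minimal sketch of the CPU pinning with NUMA requirement above, an overcloud environment file can list the scheduler filters as follows. The two required filter names come from this constraint; the remaining entries are illustrative assumptions only and must match the filter set already used in your deployment:

parameter_defaults:
  # AggregateInstanceExtraSpecsFilter and NUMATopologyFilter are required for
  # live migrating CPU-pinned instances; the other filters shown are examples only.
  NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter', 'ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'AggregateInstanceExtraSpecsFilter', 'NUMATopologyFilter']

Include the environment file with -e when you run the openstack overcloud deploy command, as with the other Compute parameters on this page.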
Multi-cell clouds In a multi-cell cloud, instances can be live migrated to a different host in the same cell, but not across cells. Floating instances When live migrating floating instances, if the configuration of NovaComputeCpuSharedSet on the destination Compute node is different from the configuration of NovaComputeCpuSharedSet on the source Compute node, the instances will not be allocated to the CPUs configured for shared (unpinned) instances on the destination Compute node. Therefore, if you need to live migrate floating instances, you must configure all the Compute nodes with the same CPU mappings for dedicated (pinned) and shared (unpinned) instances, or use a host aggregate for the shared instances. Destination Compute node capacity The destination Compute node must have sufficient capacity to host the instance that you want to migrate. SR-IOV live migration Instances with SR-IOV-based network interfaces can be live migrated. Live migrating instances with direct mode SR-IOV network interfaces incurs network downtime. This is because the direct mode interfaces need to be detached and re-attached during the migration. Packet loss on ML2/OVN deployments ML2/OVN does not support live migration without packet loss. This is because OVN cannot handle multiple port bindings and therefore does not know when a port is being migrated. To minimize packet loss during live migration, configure your ML2/OVN deployment to announce the instance on the destination host once migration is complete: Live migration on ML2/OVS deployments During the live migration process, when the virtual machine is unpaused in the destination host, the metadata service might not be available because the metadata server proxy has not yet spawned. This unavailability is brief. The service becomes available again shortly, and the live migration succeeds. Constraints that preclude live migration You cannot live migrate an instance that uses the following features. PCI passthrough QEMU/KVM hypervisors support attaching PCI devices on the Compute node to an instance. Use PCI passthrough to give an instance exclusive access to PCI devices, which appear and behave as if they are physically attached to the operating system of the instance. However, because PCI passthrough involves direct access to the physical devices, QEMU/KVM does not support live migration of instances using PCI passthrough. Port resource requests You cannot live migrate an instance that uses a port that has resource requests, such as a guaranteed minimum bandwidth QoS policy. Use the following command to check if a port has resource requests: 16.3.3. Preparing to migrate Before you migrate one or more instances, you need to determine the Compute node names and the IDs of the instances to migrate. Procedure Identify the source Compute node host name and the destination Compute node host name: List the instances on the source Compute node and locate the ID of the instance or instances that you want to migrate: Replace <source> with the name or ID of the source Compute node. Optional: If you are migrating instances from a source Compute node to perform maintenance on the node, you must disable the node to prevent the scheduler from assigning new instances to the node during maintenance: Replace <source> with the host name of the source Compute node. You are now ready to perform the migration. Follow the required procedure detailed in Cold migrating an instance or Live migrating an instance . 16.3.4. 
Cold migrating an instance Cold migrating an instance involves stopping the instance and moving it to another Compute node. Cold migration facilitates migration scenarios that live migrating cannot facilitate, such as migrating instances that use PCI passthrough. The scheduler automatically selects the destination Compute node. For more information, see Migration constraints . Procedure To cold migrate an instance, enter the following command to power off and move the instance: Replace <instance> with the name or ID of the instance to migrate. Specify the --block-migration flag if migrating a locally stored volume. Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance: A status of "VERIFY_RESIZE" indicates you need to confirm or revert the migration: If the migration worked as expected, confirm it: Replace <instance> with the name or ID of the instance to migrate. A status of "ACTIVE" indicates that the instance is ready to use. If the migration did not work as expected, revert it: Replace <instance> with the name or ID of the instance. Restart the instance: Replace <instance> with the name or ID of the instance. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. 16.3.5. Live migrating an instance Live migration moves an instance from a source Compute node to a destination Compute node with a minimal amount of downtime. Live migration might not be appropriate for all instances. For more information, see Migration constraints . Procedure To live migrate an instance, specify the instance and the destination Compute node: Replace <instance> with the name or ID of the instance. Replace <dest> with the name or ID of the destination Compute node. Note The openstack server migrate command covers migrating instances with shared storage, which is the default. Specify the --block-migration flag to migrate a locally stored volume: Confirm that the instance is migrating: Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance to confirm if the migration was successful: Replace <dest> with the name or ID of the destination Compute node. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. 16.3.6. Checking migration status Migration involves several state transitions before migration is complete. During a healthy migration, the migration state typically transitions as follows: Queued: The Compute service has accepted the request to migrate an instance, and migration is pending. Preparing: The Compute service is preparing to migrate the instance. Running: The Compute service is migrating the instance. Post-migrating: The Compute service has built the instance on the destination Compute node and is releasing resources on the source Compute node. Completed: The Compute service has completed migrating the instance and finished releasing resources on the source Compute node. Procedure Retrieve the list of migration IDs for the instance: Replace <instance> with the name or ID of the instance. 
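For reference, the retrieval step above maps to the nova server-migration-list command included in the command listing at the end of this page:

# List the migrations for an instance; the Id column provides the
# <migration_id> value used in the following steps.
nova server-migration-list <instance>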
Show the status of the migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Running the nova server-migration-show command returns the following example output: Tip The OpenStack Compute service measures progress of the migration by the number of remaining memory bytes to copy. If this number does not decrease over time, the migration might be unable to complete, and the Compute service might abort it. Sometimes instance migration can take a long time or encounter errors. For more information, see Troubleshooting migration . 16.3.7. Evacuating an instance If you want to move an instance from a dead or shut-down Compute node to a new host in the same environment, you can evacuate it. The evacuate process destroys the original instance and rebuilds it on another Compute node using the original image, instance name, UUID, network addresses, and any other resources the original instance had allocated to it. If the instance uses shared storage, the instance root disk is not rebuilt during the evacuate process, as the disk remains accessible by the destination Compute node. If the instance does not use shared storage, then the instance root disk is also rebuilt on the destination Compute node. Note You can only perform an evacuation when the Compute node is fenced, and the API reports that the state of the Compute node is "down" or "forced-down". If the Compute node is not reported as "down" or "forced-down", the evacuate command fails. To perform an evacuation, you must be a cloud administrator. 16.3.7.1. Evacuating one instance You can evacuate instances one at a time. Procedure Confirm that the instance is not running: Replace <node> with the name or UUID of the Compute node that hosts the instance. Confirm that the host Compute node is fenced or shut down: Replace <node> with the name or UUID of the Compute node that hosts the instance to evacuate. To perform an evacuation, the Compute node must have a status of down or forced-down . Disable the Compute node: Replace <node> with the name of the Compute node to evacuate the instance from. Replace <disable_host_reason> with details about why you disabled the Compute node. Evacuate the instance: Optional: Replace <pass> with the administrative password required to access the evacuated instance. If a password is not specified, a random password is generated and output when the evacuation is complete. Note The password is changed only when ephemeral instance disks are stored on the local hypervisor disk. The password is not changed if the instance is hosted on shared storage or has a Block Storage volume attached, and no error message is displayed to inform you that the password was not changed. Replace <instance> with the name or ID of the instance to evacuate. Optional: Replace <dest> with the name of the Compute node to evacuate the instance to. If you do not specify the destination Compute node, the Compute scheduler selects one for you. You can find possible Compute nodes by using the following command: Optional: Enable the Compute node when it is recovered: Replace <node> with the name of the Compute node to enable. 16.3.7.2. Evacuating all instances on a host You can evacuate all instances on a specified Compute node. Procedure Confirm that the instances to evacuate are not running: Replace <node> with the name or UUID of the Compute node that hosts the instances to evacuate. 
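For reference, the confirmation step above corresponds to the following listing command from the command listing at the end of this page; check the Status column of the output for the instances that you plan to evacuate:

# List all instances on the Compute node to evacuate and verify their status.
openstack server list --host <node> --all-projects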
Confirm that the host Compute node is fenced or shut down: Replace <node> with the name or UUID of the Compute node that hosts the instances to evacuate. To perform an evacuation, the Compute node must have a status of down or forced-down . Disable the Compute node: Replace <node> with the name of the Compute node to evacuate the instances from. Replace <disable_host_reason> with details about why you disabled the Compute node. Evacuate all instances on a specified Compute node: Optional: Replace <dest> with the name of the destination Compute node to evacuate the instances to. If you do not specify the destination, the Compute scheduler selects one for you. You can find possible Compute nodes by using the following command: Replace <node> with the name of the Compute node to evacuate the instances from. Optional: Enable the Compute node when it is recovered: Replace <node> with the name of the Compute node to enable. 16.3.8. Troubleshooting migration The following issues can arise during instance migration: The migration process encounters errors. The migration process never ends. Performance of the instance degrades after migration. 16.3.8.1. Errors during migration The following issues can send the migration operation into an error state: Running a cluster with different versions of Red Hat OpenStack Platform (RHOSP). Specifying an instance ID that cannot be found. The instance you are trying to migrate is in an error state. The Compute service is shutting down. A race condition occurs. Live migration enters a failed state. When live migration enters a failed state, it is typically followed by an error state. The following common issues can cause a failed state: A destination Compute host is not available. A scheduler exception occurs. The rebuild process fails due to insufficient computing resources. A server group check fails. The instance on the source Compute node gets deleted before migration to the destination Compute node is complete. 16.3.8.2. Never-ending live migration Live migration can fail to complete, which leaves migration in a perpetual running state. A common reason for a live migration that never completes is that client requests to the instance running on the source Compute node create changes that occur faster than the Compute service can replicate them to the destination Compute node. Use one of the following methods to address this situation: Abort the live migration. Force the live migration to complete. Aborting live migration If the instance state changes faster than the migration procedure can copy it to the destination node, and you do not want to temporarily suspend the instance operations, you can abort the live migration. Procedure Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Abort the live migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Forcing live migration to complete If the instance state changes faster than the migration procedure can copy it to the destination node, and you want to temporarily suspend the instance operations to force migration to complete, you can force the live migration procedure to complete. Important Forcing live migration to complete might lead to perceptible downtime. Procedure Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Force the live migration to complete: Replace <instance> with the name or ID of the instance. 
Replace <migration_id> with the ID of the migration. 16.3.8.3. Instance performance degrades after migration For instances that use a NUMA topology, the source and destination Compute nodes must have the same NUMA topology and configuration. The NUMA topology of the destination Compute node must have sufficient resources available. If the NUMA configuration between the source and destination Compute nodes is not the same, live migration can succeed while the performance of the instance degrades. For example, if the source Compute node maps NIC 1 to NUMA node 0, but the destination Compute node maps NIC 1 to NUMA node 5, after migration the instance might route network traffic from a CPU on one NUMA node across the bus to a CPU on NUMA node 5 to reach NIC 1. The instance can continue to behave as expected, but with degraded performance. Similarly, if NUMA node 0 on the source Compute node has sufficient available CPU and RAM, but NUMA node 0 on the destination Compute node already has instances using some of the resources, the instance might run correctly but suffer performance degradation. For more information, see Migration constraints .
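As a quick, informal check related to the NUMA guidance above (not part of the documented procedure), you can compare the NUMA layout of the source and destination Compute nodes with standard host tools before migrating:

# Run on both the source and the destination Compute node and compare the output.
numactl --hardware       # NUMA nodes with their CPUs and memory
lscpu | grep -i numa     # NUMA node count and per-node CPU lists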
[ "[stack@director ~]USD source ~/stackrc", "parameter_defaults: NovaVNCProxySSLMinimumVersion: <version>", "parameter_defaults: NovaVNCProxySSLCiphers: <ciphers>", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "parameter_defaults: NovaCronArchiveDeleteRowsPurge: True", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: workarounds/enable_qemu_monitor_announce_self: value: 'True'", "openstack port show <port_name/port_id>", "(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list", "(overcloud)USD openstack server list --host <source> --all-projects", "(overcloud)USD openstack compute service set <source> nova-compute --disable", "(overcloud)USD openstack server migrate <instance> --wait", "(overcloud)USD openstack server list --all-projects", "(overcloud)USD openstack server resize --confirm <instance>", "(overcloud)USD openstack server resize --revert <instance>", "(overcloud)USD openstack server start <instance>", "(overcloud)USD openstack compute service set <source> nova-compute --enable", "(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait", "(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait --block-migration", "(overcloud)USD openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | MIGRATING | | ... | ... | +----------------------+--------------------------------------+", "(overcloud)USD openstack server list --host <dest> --all-projects", "(overcloud)USD openstack compute service set <source> nova-compute --enable", "nova server-migration-list <instance> +----+-------------+----------- (...) | Id | Source Node | Dest Node | (...) +----+-------------+-----------+ (...) | 2 | - | - | (...) 
+----+-------------+-----------+ (...)", "nova server-migration-show <instance> <migration_id>", "+------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | created_at | 2017-03-08T02:53:06.000000 | | dest_compute | controller | | dest_host | - | | dest_node | - | | disk_processed_bytes | 0 | | disk_remaining_bytes | 0 | | disk_total_bytes | 0 | | id | 2 | | memory_processed_bytes | 65502513 | | memory_remaining_bytes | 786427904 | | memory_total_bytes | 1091379200 | | server_uuid | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | source_compute | compute2 | | source_node | - | | status | running | | updated_at | 2017-03-08T02:53:47.000000 | +------------------------+--------------------------------------+", "(overcloud)USD openstack server list --host <node> --all-projects", "(overcloud)[stack@director ~]USD openstack baremetal node show <node>", "(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>", "(overcloud)[stack@director ~]USD nova evacuate [--password <pass>] <instance> [<dest>]", "(overcloud)[stack@director ~]USD openstack hypervisor list", "(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --enable", "(overcloud)USD openstack server list --host <node> --all-projects", "(overcloud)[stack@director ~]USD openstack baremetal node show <node>", "(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>", "(overcloud)[stack@director ~]USD nova host-evacuate [--target_host <dest>] <node>", "(overcloud)[stack@director ~]USD openstack hypervisor list", "(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --enable", "nova server-migration-list <instance>", "nova live-migration-abort <instance> <migration_id>", "nova server-migration-list <instance>", "nova live-migration-force-complete <instance> <migration_id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_managing-instances_managing-instances
25.5. Configuring a Fibre Channel over Ethernet Interface
25.5. Configuring a Fibre Channel over Ethernet Interface Setting up and deploying a Fibre Channel over Ethernet (FCoE) interface requires two packages: fcoe-utils lldpad Once these packages are installed, perform the following procedure to enable FCoE over a virtual LAN (VLAN): Procedure 25.10. Configuring an Ethernet Interface to Use FCoE To configure a new VLAN, make a copy of an existing network script, for example /etc/fcoe/cfg-eth0 , and change the name to the Ethernet device that supports FCoE. This provides you with a default file to configure. Given that the FCoE device is eth X , run: Modify the contents of cfg-eth X as needed. Notably, set DCB_REQUIRED to no for networking interfaces that implement a hardware Data Center Bridging Exchange (DCBX) protocol client. If you want the device to automatically load during boot time, set ONBOOT=yes in the corresponding /etc/sysconfig/network-scripts/ifcfg-eth X file. For example, if the FCoE device is eth2, edit /etc/sysconfig/network-scripts/ifcfg-eth2 accordingly. Start the data center bridging daemon ( dcbd ) by running: For networking interfaces that implement a hardware DCBX client, skip this step. For interfaces that require a software DCBX client, enable data center bridging on the Ethernet interface by running: Then, enable FCoE on the Ethernet interface by running: Note that these commands only work if the dcbd settings for the Ethernet interface were not changed. Load the FCoE device now using: Start FCoE using: The FCoE device appears soon if all other settings on the fabric are correct. To view configured FCoE devices, run: After correctly configuring the Ethernet interface to use FCoE, Red Hat recommends that you set FCoE and the lldpad service to run at startup. To do so, use the systemctl utility: Note Running the # systemctl stop fcoe command stops the daemon, but does not reset the configuration of FCoE interfaces. To do so, run the # systemctl -s SIGHUP kill fcoe command. As of Red Hat Enterprise Linux 7, Network Manager has the ability to query and set the DCB settings of a DCB capable Ethernet interface.
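Pulling the commands from the listing below into order, a minimal run-through for the eth2 example used above might look like the following; this assumes a software DCBX client, so skip the two dcbtool steps for interfaces with a hardware DCBX client:

cp /etc/fcoe/cfg-eth0 /etc/fcoe/cfg-eth2    # copy an existing configuration as the template
# Edit /etc/fcoe/cfg-eth2 (DCB_REQUIRED) and /etc/sysconfig/network-scripts/ifcfg-eth2 (ONBOOT) as described above.
systemctl start lldpad                      # start the data center bridging daemon
dcbtool sc eth2 dcb on                      # enable data center bridging on the interface
dcbtool sc eth2 app:fcoe e:1                # enable FCoE on the interface
ip link set dev eth2 up                     # load the FCoE device
systemctl start fcoe                        # start FCoE
fcoeadm -i                                  # verify the configured FCoE devices
systemctl enable lldpad                     # optionally start lldpad at boot
systemctl enable fcoe                       # optionally start FCoE at boot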
[ "cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth X", "systemctl start lldpad", "dcbtool sc eth X dcb on", "dcbtool sc eth X app:fcoe e:1", "ip link set dev eth X up", "systemctl start fcoe", "fcoeadm -i", "systemctl enable lldpad", "systemctl enable fcoe" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/fcoe-config
Creating and Managing Images
Creating and Managing Images Red Hat OpenStack Platform 16.2 Creating and Managing Images OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_images/index
Chapter 23. Configuring a custom PKI
Chapter 23. Configuring a custom PKI Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster. You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime. During installation , configure the cluster-wide proxy . You must define your privately signed CA certificates in the install-config.yaml file's additionalTrustBundle setting. The installation program generates a ConfigMap that is named user-ca-bundle that contains the additional CA certificates you defined. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object's trustedCA field. At runtime , modify the default Proxy object to include your privately signed CA certificates (part of cluster's proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the trustedCA referencing the privately signed certificates' ConfigMap. Note The installer configuration's additionalTrustBundle field and the proxy resource's trustedCA field are used to manage the cluster-wide trust bundle; additionalTrustBundle is used at install time and the proxy's trustedCA is used at runtime. The trustedCA field is a reference to a ConfigMap containing the custom certificate and key pair used by the cluster component. 23.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 23.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 
2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 23.3. Certificate injection using Operators Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection. Important After adding a config.openshift.io/inject-trusted-cabundle="true" label to the config map, existing data in it is deleted. The Cluster Network Operator takes ownership of a config map and only accepts ca-bundle as data. You must use a separate config map to store service-ca.crt by using the service.beta.openshift.io/inject-cabundle=true annotation or a similar configuration. Adding a config.openshift.io/inject-trusted-cabundle="true" label and service.beta.openshift.io/inject-cabundle=true annotation on the same config map can cause issues. 
Operators request this injection by creating an empty ConfigMap with the following label: config.openshift.io/inject-trusted-cabundle="true" An example of the empty ConfigMap: apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: "true" name: ca-inject 1 namespace: apache 1 Specifies the empty ConfigMap name. The Operator mounts this ConfigMap into the container's local trust store. Note Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true label. The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example: apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: ... spec: ... containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2 1 ca-bundle.crt is required as the ConfigMap key. 2 tls-ca-bundle.pem is required as the ConfigMap path.
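As an alternative sketch to applying the ca-inject manifest shown above, the same empty, labeled ConfigMap could be created directly with the oc CLI; the ca-inject name and apache namespace are taken from the example, and this is not a documented requirement of the procedure:

# Create an empty ConfigMap and add the injection label; the Cluster Network
# Operator then populates it with the merged CA bundle.
oc create configmap ca-inject -n apache
oc label configmap ca-inject config.openshift.io/inject-trusted-cabundle=true -n apache
# Confirm that the ca-bundle.crt key has been injected.
oc get configmap ca-inject -n apache -o yaml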
[ "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/configuring-a-custom-pki
Generating a custom LLM using RHEL AI
Generating a custom LLM using RHEL AI Red Hat Enterprise Linux AI 1.3 Using SDG, training, and evaluation to create a custom LLM Red Hat RHEL AI Documentation Team
[ "ilab data generate", "ilab data generate --num-cpus 4", "Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120 INFO 2024-08-22 17:01:19,142 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 3/120", "INFO 2024-08-22 15:16:38,933 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 73/120 INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 74/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. Adding more examples may help.", "INFO 2024-08-16 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-08-16 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s", "ls ~/.local/share/instructlab/datasets/", "knowledge_recipe_2024-08-13T20_54_21.yaml skills_recipe_2024-08-13T20_54_21.yaml knowledge_train_msgs_2024-08-13T20_54_21.jsonl skills_train_msgs_2024-08-13T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-08-13T20_54_21.jsonl node_datasets_2024-08-13T15_12_12/", "cat ~/.local/share/datasets/<jsonl-dataset>", "{\"messages\":[{\"content\":\"I am, Red Hat\\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.\",\"role\":\"system\"},{\"content\":\"<|user|>\\n### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. 
[Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012. Located around\\n7.2 billion light years away, it is composed of two subclusters in the\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\nX-rays and infrared images.\\n\\n### Meteor showers\\n\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\"wikilink\\\") of two\\nannual [meteor showers](meteor_shower \\\"wikilink\\\"). The\\n[Phoenicids](Phoenicids \\\"wikilink\\\"), also known as the December\\nPhoenicids, were first observed on 3 December 1887. The shower was\\nparticularly intense in December 1956, and is thought related to the\\nbreakup of the [short-period comet](short-period_comet \\\"wikilink\\\")\\n[289P\\/Blanpain](289P\\/Blanpain \\\"wikilink\\\"). It peaks around 45 December,\\nthough is not seen every year. A very minor meteor shower peaks\\naround July 14 with around one meteor an hour, though meteors can be\\nseen anytime from July 3 to 18; this shower is referred to as the July\\nPhoenicids.\\n\\nHow many light years wide is the Phoenix cluster?\\n<|assistant|>\\n' 'The Phoenix cluster is around 7.3 million light years wide.'\",\"role\":\"pretraining\"}],\"metadata\":\"{\\\"sdg_document\\\": \\\"### Deep-sky objects\\\\n\\\\nThe constellation does not lie on the [galactic\\\\nplane](galactic_plane \\\\\\\"wikilink\\\\\\\") of the Milky Way, and there are no\\\\nprominent star clusters. [NGC 625](NGC_625 \\\\\\\"wikilink\\\\\\\") is a dwarf\\\\n[irregular galaxy](irregular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude\\\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\\\nyears in diameter, it is an outlying member of the [Sculptor\\\\nGroup](Sculptor_Group \\\\\\\"wikilink\\\\\\\"). NGC 625 is thought to have been\\\\ninvolved in a collision and is experiencing a burst of [active star\\\\nformation](Active_galactic_nucleus \\\\\\\"wikilink\\\\\\\"). [NGC\\\\n37](NGC_37 \\\\\\\"wikilink\\\\\\\") is a [lenticular\\\\ngalaxy](lenticular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude 14.66. 
It is\\\\napproximately 42 [kiloparsecs](kiloparsecs \\\\\\\"wikilink\\\\\\\") (137,000\\\\n[light-years](light-years \\\\\\\"wikilink\\\\\\\")) in diameter and about 12.9\\\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\\\\\"wikilink\\\\\\\")\\\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\\\\\"wikilink\\\\\\\"), and three\\\\nspiral galaxies [NGC 88](NGC_88 \\\\\\\"wikilink\\\\\\\"), [NGC 89](NGC_89 \\\\\\\"wikilink\\\\\\\")\\\\nand [NGC 92](NGC_92 \\\\\\\"wikilink\\\\\\\")) is a group of four galaxies located\\\\naround 160 million light-years away which are in the process of\\\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\\\n243-49 is [HLX-1](HLX-1 \\\\\\\"wikilink\\\\\\\"), an [intermediate-mass black\\\\nhole](intermediate-mass_black_hole \\\\\\\"wikilink\\\\\\\")\\the first one of its kind\\\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\\\nabsorbed in a [collision](Interacting_galaxy \\\\\\\"wikilink\\\\\\\") with ESO\\\\n243-49. Before its discovery, this class of black hole was only\\\\nhypothesized.\\\\n\\\\nLying within the bounds of the constellation is the gigantic [Phoenix\\\\ncluster](Phoenix_cluster \\\\\\\"wikilink\\\\\\\"), which is around 7.3 million light\\\\nyears wide and 5.7 billion light years away, making it one of the most\\\\nmassive [galaxy clusters](galaxy_cluster \\\\\\\"wikilink\\\\\\\"). It was first\\\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\\\nnew stars a year. Larger still is [El\\\\nGordo](El_Gordo_(galaxy_cluster) \\\\\\\"wikilink\\\\\\\"), or officially ACT-CL\\\\nJ0102-4915, whose discovery was announced in 2012. Located around\\\\n7.2 billion light years away, it is composed of two subclusters in the\\\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\\\nX-rays and infrared images.\\\\n\\\\n### Meteor showers\\\\n\\\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\\\\\"wikilink\\\\\\\") of two\\\\nannual [meteor showers](meteor_shower \\\\\\\"wikilink\\\\\\\"). The\\\\n[Phoenicids](Phoenicids \\\\\\\"wikilink\\\\\\\"), also known as the December\\\\nPhoenicids, were first observed on 3 December 1887. The shower was\\\\nparticularly intense in December 1956, and is thought related to the\\\\nbreakup of the [short-period comet](short-period_comet \\\\\\\"wikilink\\\\\\\")\\\\n[289P\\/Blanpain](289P\\/Blanpain \\\\\\\"wikilink\\\\\\\"). It peaks around 4\\5 December,\\\\nthough is not seen every year. 
A very minor meteor shower peaks\\\\naround July 14 with around one meteor an hour, though meteors can be\\\\nseen anytime from July 3 to 18; this shower is referred to as the July\\\\nPhoenicids.\\\", \\\"domain\\\": \\\"astronomy\\\", \\\"dataset\\\": \\\"document_knowledge_qa\\\"}\",\"id\":\"1df7c219-a062-4511-8bae-f55c88927dc1\"}", "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>", "Training Phase 1/2 TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "Training Phase 2/2 TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "MT-Bench evaluation for Phase 2 Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... 
Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1", "Training finished! Best final checkpoint: samples_1945 with score: 6.813759384", "ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/", "samples_1711 samples_1945 samples_1456 samples_1462 samples_1903", "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? n", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? y", "ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --tasks-dir ~/.local/share/instructlab/datasets/<node-dataset> --base-model ~/.cache/instructlab/models/granite-7b-starter", "KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04)", "ilab model evaluate --benchmark mt_bench_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 --branch <worker-branch> --base-branch <worker-branch>", "SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. 
foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5)", "ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665", "KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46", "ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665", "SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05", "ilab model serve --model-path <path-to-best-performed-checkpoint>", "ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/", "ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.", "ilab model chat --model <path-to-best-performed-checkpoint-file>", "ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945", "ilab model chat โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ system โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ >>> [S][default]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html-single/generating_a_custom_llm_using_rhel_ai/index