Appendix G. Defining allowed CPU types in Self-hosted Engine deployment
Appendix G. Defining allowed CPU types in Self-hosted Engine deployment

Procedure

1. Create a file named deploy.json and, from the table shown below, select a CPU type for the he_cluster_cpu_type key. For example, if the CPU type you want is Secure Intel Nehalem Family, then deploy.json should look like the following:
2. Provide the deploy.json file to the hosted-engine --deploy process.

Table G.1. Allowed CPU Types

CPU type name | CPU properties
Intel Nehalem Family | vmx,nx,model_Nehalem:Nehalem:x86_64
Secure Intel Nehalem Family | vmx,spec_ctrl,ssbd,model_Nehalem:Nehalem,+spec-ctrl,+ssbd:x86_64
Intel Westmere Family | aes,vmx,nx,model_Westmere:Westmere:x86_64
Secure Intel Westmere Family | aes,vmx,spec_ctrl,ssbd,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd:x86_64
Intel SandyBridge Family | vmx,nx,model_SandyBridge:SandyBridge:x86_64
Secure Intel SandyBridge Family | vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64
Intel IvyBridge Family | vmx,nx,model_IvyBridge:IvyBridge:x86_64
Secure Intel IvyBridge Family | vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64
Intel Haswell Family | vmx,nx,model_Haswell-noTSX:Haswell-noTSX:x86_64
Secure Intel Haswell Family | vmx,spec_ctrl,ssbd,md_clear,model_Haswell-noTSX:Haswell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64
Intel Broadwell Family | vmx,nx,model_Broadwell-noTSX:Broadwell-noTSX:x86_64
Secure Intel Broadwell Family | vmx,spec_ctrl,ssbd,md_clear,model_Broadwell-noTSX:Broadwell-noTSX,+spec-ctrl,+ssbd,+md-clear:x86_64
Intel Skylake Client Family | vmx,nx,model_Skylake-Client:Skylake-Client,-hle,-rtm,-mpx:x86_64
Secure Intel Skylake Client Family | vmx,ssbd,md_clear,model_Skylake-Client-noTSX-IBRS:Skylake-Client-noTSX-IBRS,+ssbd,+md-clear,-mpx:x86_64
Intel Skylake Server Family | vmx,nx,model_Skylake-Server:Skylake-Server,-hle,-rtm,-mpx:x86_64
Secure Intel Skylake Server Family | vmx,ssbd,md_clear,model_Skylake-Server-noTSX-IBRS:Skylake-Server-noTSX-IBRS,+ssbd,+md-clear,-mpx:x86_64
Intel Cascadelake Server Family | vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,-mpx:x86_64
Secure Intel Cascadelake Server Family | vmx,model_Cascadelake-Server-noTSX:Cascadelake-Server-noTSX,-mpx:x86_64
Intel Icelake Server Family | vmx,model_Icelake-Server-noTSX:Icelake-Server-noTSX,-mpx:x86_64
Secure Intel Icelake Server Family | vmx,arch-capabilities,rdctl-no,ibrs-all,skip-l1dfl-vmentry,mds-no,pschange-mc-no,taa-no,model_Icelake-Server-noTSX:Icelake-Server-noTSX,+arch-capabilities,+rdctl-no,+ibrs-all,+skip-l1dfl-vmentry,+mds-no,+pschange-mc-no,+taa-no,-mpx:x86_64
AMD Opteron G4 | svm,nx,model_Opteron_G4:Opteron_G4:x86_64
AMD Opteron G5 | svm,nx,model_Opteron_G5:Opteron_G5:x86_64
AMD EPYC | svm,nx,model_EPYC:EPYC:x86_64
Secure AMD EPYC | svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64
IBM POWER8 | powernv,model_POWER8:POWER8:ppc64
IBM POWER9 | powernv,model_POWER9:POWER9:ppc64
IBM z114, z196 | sie,model_z196-base:z196-base:s390x
IBM zBC12, zEC12 | sie,model_zEC12-base:zEC12-base:s390x
IBM z13s, z13 | sie,model_z13-base:z13-base:s390x
IBM z14 | sie,model_z14-base:z14-base:s390x
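For reference, a deploy.json that selects the Secure Intel Nehalem Family type, together with the corresponding deployment command, is shown in the commands below; the /root/deploy.json path is only an example location:

# cat deploy.json
{
  "he_cluster_cpu_type": "Secure Intel Nehalem Family"
}

# hosted-engine --deploy --ansible-extra-vars=@/root/deploy.json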
[ "cat deploy.json { \"he_cluster_cpu_type\": \"Secure Intel Nehalem Family\" }", "hosted-engine --deploy --ansible-extra-vars=@/root/deploy.json" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/proc-proc-defining_allowed_cpu_migrate_dwh_db
Chapter 12. EgressRouter [network.operator.openshift.io/v1]
Chapter 12. EgressRouter [network.operator.openshift.io/v1]

Description
EgressRouter is a feature allowing the user to define an egress router that acts as a bridge between pods and external systems. The egress router runs a service that redirects egress traffic originating from a pod or a group of pods to a remote external system or multiple destinations as per configuration. It is consumed by the cluster-network-operator. More specifically, given an EgressRouter CR with <name>, the CNO will create and manage:
- A service called <name>
- An egress pod called <name>
- A NAD called <name>
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). EgressRouter is a single egressrouter pod configuration object.
Type: object
Required: spec

12.1. Specification

Property Type Description
apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec object Specification of the desired egress router.
status object Observed status of EgressRouter.

12.1.1. .spec
Description: Specification of the desired egress router.
Type: object
Required: addresses, mode, networkInterface

Property Type Description
addresses array List of IP addresses to configure on the pod's secondary interface.
addresses[] object EgressRouterAddress contains a pair of IP CIDR and gateway to be configured on the router's interface.
mode string Mode depicts the mode that is used for the egress router. The default mode is "Redirect" and is the only supported mode currently.
networkInterface object Specification of interface to create/use. The default is macvlan. Currently only macvlan is supported.
redirect object Redirect represents the configuration parameters specific to redirect mode.

12.1.2. .spec.addresses
Description: List of IP addresses to configure on the pod's secondary interface.
Type: array

12.1.3. .spec.addresses[]
Description: EgressRouterAddress contains a pair of IP CIDR and gateway to be configured on the router's interface.
Type: object
Required: ip

Property Type Description
gateway string IP address of the next-hop gateway, if it cannot be automatically determined. Can be IPv4 or IPv6.
ip string IP is the address to configure on the router's interface. Can be IPv4 or IPv6.

12.1.4. .spec.networkInterface
Description: Specification of interface to create/use. The default is macvlan. Currently only macvlan is supported.
Type: object

Property Type Description
macvlan object Arguments specific to the interfaceType macvlan

12.1.5. .spec.networkInterface.macvlan
Description: Arguments specific to the interfaceType macvlan
Type: object
Required: mode

Property Type Description
master string Name of the master interface. Need not be specified if it can be inferred from the IP address.
mode string Mode depicts the mode that is used for the macvlan interface; one of Bridge|Private|VEPA|Passthru. The default mode is "Bridge".

12.1.6. .spec.redirect
Description: Redirect represents the configuration parameters specific to redirect mode.
Type: object

Property Type Description
fallbackIP string FallbackIP specifies the remote destination's IP address. Can be IPv4 or IPv6. If no redirect rules are specified, all traffic from the router is redirected to this IP. If redirect rules are specified, then any connections on any other port (undefined in the rules) on the router will be redirected to this IP. If redirect rules are specified and no fallback IP is provided, connections on other ports will simply be rejected.
redirectRules array List of L4RedirectRules that define the DNAT redirection from the pod to the destination in redirect mode.
redirectRules[] object L4RedirectRule defines a DNAT redirection from a given port to a destination IP and port.

12.1.7. .spec.redirect.redirectRules
Description: List of L4RedirectRules that define the DNAT redirection from the pod to the destination in redirect mode.
Type: array

12.1.8. .spec.redirect.redirectRules[]
Description: L4RedirectRule defines a DNAT redirection from a given port to a destination IP and port.
Type: object
Required: destinationIP, port, protocol

Property Type Description
destinationIP string IP specifies the remote destination's IP address. Can be IPv4 or IPv6.
port integer Port is the port number to which clients should send traffic to be redirected.
protocol string Protocol can be TCP, SCTP or UDP.
targetPort integer TargetPort allows specifying the port number on the remote destination to which the traffic gets redirected. If unspecified, the value from "Port" is used.

12.1.9. .status
Description: Observed status of EgressRouter.
Type: object
Required: conditions

Property Type Description
conditions array Observed status of the egress router
conditions[] object EgressRouterStatusCondition represents the state of the egress router's managed and monitored components.

12.1.10. .status.conditions
Description: Observed status of the egress router
Type: array

12.1.11. .status.conditions[]
Description: EgressRouterStatusCondition represents the state of the egress router's managed and monitored components.
Type: object
Required: status, type

Property Type Description
lastTransitionTime `` LastTransitionTime is the time of the last update to the current status property.
message string Message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines.
reason string Reason is the CamelCase reason for the condition's current status.
status string Status of the condition, one of True, False, Unknown.
type string Type specifies the aspect reported by this condition; one of Available, Progressing, Degraded.
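Putting the spec fields above together, a minimal EgressRouter object might look like the following sketch. The name, namespace, addresses, and destination values are placeholders only; adjust them to your environment.

apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
  name: egress-router-sample        # example name; the CNO creates a service, pod, and NAD with this name
  namespace: egress-router-project  # example namespace
spec:
  addresses:
    - ip: 192.168.12.99/24          # IP CIDR to configure on the pod's secondary interface
      gateway: 192.168.12.1         # next-hop gateway, if it cannot be determined automatically
  mode: Redirect                    # the only supported mode currently
  networkInterface:
    macvlan:
      mode: Bridge                  # default macvlan mode
  redirect:
    redirectRules:
      - destinationIP: 10.0.0.99    # example remote destination
        port: 8080
        protocol: TCP
        targetPort: 80              # optional; defaults to the value of port
    fallbackIP: 10.0.0.100          # optional; used for connections that match no rule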
12.2. API endpoints

The following API endpoints are available:

/apis/network.operator.openshift.io/v1/egressrouters
  GET: list objects of kind EgressRouter
/apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters
  DELETE: delete collection of EgressRouter
  GET: list objects of kind EgressRouter
  POST: create an EgressRouter
/apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}
  DELETE: delete an EgressRouter
  GET: read the specified EgressRouter
  PATCH: partially update the specified EgressRouter
  PUT: replace the specified EgressRouter
/apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}/status
  GET: read status of the specified EgressRouter
  PATCH: partially update status of the specified EgressRouter
  PUT: replace status of the specified EgressRouter

12.2.1. /apis/network.operator.openshift.io/v1/egressrouters

HTTP method: GET
Description: list objects of kind EgressRouter
Table 12.1. HTTP responses
HTTP code Response body
200 - OK EgressRouterList schema
401 - Unauthorized Empty

12.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters

HTTP method: DELETE
Description: delete collection of EgressRouter
Table 12.2. HTTP responses
HTTP code Response body
200 - OK Status schema
401 - Unauthorized Empty

HTTP method: GET
Description: list objects of kind EgressRouter
Table 12.3. HTTP responses
HTTP code Response body
200 - OK EgressRouterList schema
401 - Unauthorized Empty

HTTP method: POST
Description: create an EgressRouter
Table 12.4. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 12.5. Body parameters
Parameter Type Description
body EgressRouter schema
Table 12.6. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
201 - Created EgressRouter schema
202 - Accepted EgressRouter schema
401 - Unauthorized Empty

12.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}

Table 12.7. Global path parameters
Parameter Type Description
name string name of the EgressRouter

HTTP method: DELETE
Description: delete an EgressRouter
Table 12.8. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
Table 12.9. HTTP responses
HTTP code Response body
200 - OK Status schema
202 - Accepted Status schema
401 - Unauthorized Empty

HTTP method: GET
Description: read the specified EgressRouter
Table 12.10. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
401 - Unauthorized Empty

HTTP method: PATCH
Description: partially update the specified EgressRouter
Table 12.11. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 12.12. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
401 - Unauthorized Empty

HTTP method: PUT
Description: replace the specified EgressRouter
Table 12.13. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 12.14. Body parameters
Parameter Type Description
body EgressRouter schema
Table 12.15. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
201 - Created EgressRouter schema
401 - Unauthorized Empty
12.2.4. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/egressrouters/{name}/status

Table 12.16. Global path parameters
Parameter Type Description
name string name of the EgressRouter

HTTP method: GET
Description: read status of the specified EgressRouter
Table 12.17. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
401 - Unauthorized Empty

HTTP method: PATCH
Description: partially update status of the specified EgressRouter
Table 12.18. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 12.19. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
401 - Unauthorized Empty

HTTP method: PUT
Description: replace status of the specified EgressRouter
Table 12.20. Query parameters
Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 12.21. Body parameters
Parameter Type Description
body EgressRouter schema
Table 12.22. HTTP responses
HTTP code Response body
200 - OK EgressRouter schema
201 - Created EgressRouter schema
401 - Unauthorized Empty
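As a usage sketch (not part of the generated reference above), these endpoints are what the oc client calls when you manage EgressRouter objects; the namespace, object name, and file name below are placeholders only:

# create an EgressRouter from a manifest such as the example shown earlier
oc apply -f egress-router-sample.yaml -n egress-router-project

# list objects of kind EgressRouter in a namespace
oc get egressrouters.network.operator.openshift.io -n egress-router-project

# read a specific EgressRouter, including its status conditions
oc get egressrouter egress-router-sample -n egress-router-project -o yaml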
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/egressrouter-network-operator-openshift-io-v1
Part III. The Red Hat build of OptaPlanner solver
Part III. The Red Hat build of OptaPlanner solver Solving a planning problem with OptaPlanner consists of the following steps: Model your planning problem as a class annotated with the @PlanningSolution annotation (for example, the NQueens class). Configure a Solver (for example a First Fit and Tabu Search solver for any NQueens instance). Load a problem data set from your data layer (for example a Four Queens instance). That is the planning problem. Solve it with Solver.solve(problem) , which returns the best solution found.
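A minimal sketch of those four steps in Java, assuming the NQueens @PlanningSolution class from step 1 and a solver configuration resource named nqueensSolverConfig.xml (both names are illustrative):

import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;

public class NQueensApp {

    public static void main(String[] args) {
        // Step 2: build a Solver from an XML solver configuration on the classpath
        // (the resource name is an assumption for this sketch).
        SolverFactory<NQueens> solverFactory =
                SolverFactory.createFromXmlResource("nqueensSolverConfig.xml");
        Solver<NQueens> solver = solverFactory.buildSolver();

        // Step 3: load a problem data set from your data layer, for example a Four Queens instance.
        // loadFourQueens() is a hypothetical helper standing in for that data layer.
        NQueens problem = loadFourQueens();

        // Step 4: solve the problem; solve() blocks and returns the best solution found.
        NQueens bestSolution = solver.solve(problem);
        System.out.println(bestSolution);
    }

    private static NQueens loadFourQueens() {
        // Build or read a Four Queens instance here.
        throw new UnsupportedOperationException("Replace with your data layer");
    }
}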
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/assembly-planner-configuration
Chapter 5. Technology Previews
Chapter 5. Technology Previews

AMQ Streams 2.5 includes the following Technology Preview features.

Important: Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. These features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope.

5.1. KRaft mode

Apache Kafka is in the process of phasing out the need for ZooKeeper. You can now try deploying a Kafka cluster in KRaft (Kafka Raft metadata) mode without ZooKeeper as a technology preview.

Caution: This mode is intended only for development and testing, and must not be enabled for a production environment.

Currently, the KRaft mode in AMQ Streams has the following major limitations:
- Moving from Kafka clusters with ZooKeeper to KRaft clusters or the other way around is not supported.
- Upgrades and downgrades of Apache Kafka versions are not supported.
- JBOD storage with multiple disks is not supported.
- Many configuration options are still in development.

See Running Kafka in KRaft mode.

5.2. Kafka Static Quota plugin configuration

Use the technology preview of the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.

Example Kafka Static Quota plugin configuration

client.quota.callback.class=io.strimzi.kafka.quotas.StaticQuotaCallback
client.quota.callback.static.produce=1000000
client.quota.callback.static.fetch=1000000
client.quota.callback.static.storage.soft=400000000000
client.quota.callback.static.storage.hard=500000000000
client.quota.callback.static.storage.check-interval=5

See Setting limits on brokers using the Kafka Static Quota plugin.
[ "client.quota.callback.class= io.strimzi.kafka.quotas.StaticQuotaCallback client.quota.callback.static.produce= 1000000 client.quota.callback.static.fetch= 1000000 client.quota.callback.static.storage.soft= 400000000000 client.quota.callback.static.storage.hard= 500000000000 client.quota.callback.static.storage.check-interval= 5" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_rhel/tech-preview-str
13.9. Language Support
13.9. Language Support

To install support for additional locales and language dialects, select Language Support from the Installation Summary screen. Use your mouse to select the language for which you would like to install support. In the left panel, select your language of choice, for example Español. Then you can select a locale specific to your region in the right panel, for example Español (Costa Rica). You can select multiple languages and multiple locales. The selected languages are highlighted in bold in the left panel.

Figure 13.6. Configuring Language Support

Once you have made your selections, click Done to return to the Installation Summary screen.

Note: To change your language support configuration after you have completed the installation, visit the Region & Language section of the Settings dialog window.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-language-support-ppc
Chapter 13. Concepts for configuring thread pools
Chapter 13. Concepts for configuring thread pools

This section is intended for readers who want to understand the considerations and best practices for configuring thread pools and connection pools for Red Hat build of Keycloak. For a configuration where this is applied, visit Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator.

13.1. Concepts

13.1.1. Quarkus executor pool

Red Hat build of Keycloak requests, as well as blocking probes, are handled by an executor pool. Depending on the available CPU cores, it has a maximum size of 200 or more threads. Threads are created as needed, and will end when no longer needed, so the system will scale up and down automatically. Red Hat build of Keycloak allows configuring the maximum thread pool size by the http-pool-max-threads configuration option. See Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator for an example.

When running on Kubernetes, adjust the number of worker threads to avoid creating more load than what the CPU limit allows for the Pod, to avoid throttling, which would lead to congestion. When running on physical machines, adjust the number of worker threads to avoid creating more load than the node can handle, to avoid congestion. Congestion would result in longer response times and increased memory usage, and eventually an unstable system. Ideally, you should start with a low limit of threads and adjust it according to the target throughput and response time. When the load and the number of threads increase, the database connections can also become a bottleneck. Once a request cannot acquire a database connection within 5 seconds, it will fail with a message in the log like Unable to acquire JDBC Connection. The caller will receive a response with a 5xx HTTP status code indicating a server-side error.

If you increase the number of database connections and the number of threads too much, the system will be congested under a high load, with requests queueing up, which leads to bad performance. The number of database connections is configured via the Database settings db-pool-initial-size, db-pool-min-size and db-pool-max-size respectively. Low numbers ensure fast response times for all clients, even if there is an occasionally failing request when there is a load spike.

13.1.2. JGroups connection pool

The combined number of executor threads in all Red Hat build of Keycloak nodes in the cluster should not exceed the number of threads available in the JGroups thread pool, to avoid the error org.jgroups.util.ThreadPool: thread pool is full. To see the error the first time it happens, the system property jgroups.thread_dumps_threshold needs to be set to 1, as otherwise the message appears only after 10000 requests have been rejected.

The number of JGroups threads is 200 by default. While it can be configured using the Java system property jgroups.thread_pool.max_threads, we advise keeping it at this value. As shown in experiments, the total number of Quarkus worker threads in the cluster must not exceed the number of threads in the JGroups thread pool of 200 in each node to avoid deadlocks in the JGroups communication. Given a Red Hat build of Keycloak cluster with four Pods, each Pod should then have 50 Quarkus worker threads. Use the Red Hat build of Keycloak configuration option http-pool-max-threads to configure the maximum number of Quarkus worker threads.

Use metrics to monitor the total number of JGroups threads in the pool and the number of threads active in the pool.
When using TCP as the JGroups transport protocol, the metrics vendor_jgroups_tcp_get_thread_pool_size and vendor_jgroups_tcp_get_thread_pool_size_active are available for monitoring. When using UDP, the metrics vendor_jgroups_udp_get_thread_pool_size and vendor_jgroups_udp_get_thread_pool_size_active are available. This is useful to verify that limiting the Quarkus thread pool size keeps the number of active JGroups threads below the maximum JGroups thread pool size.

13.1.3. Load Shedding

By default, Red Hat build of Keycloak will queue all incoming requests infinitely, even if the request processing stalls. This will use additional memory in the Pod, can exhaust resources in the load balancers, and the requests will eventually time out on the client side without the client knowing if the request has been processed. To limit the number of queued requests in Red Hat build of Keycloak, set an additional Quarkus configuration option. Configure http-max-queued-requests to specify a maximum queue length to allow for effective load shedding once this queue size is exceeded. Assuming a Red Hat build of Keycloak Pod processes around 200 requests per second, a queue of 1000 would lead to maximum waiting times of around 5 seconds. When this setting is active, requests that exceed the number of queued requests will return with an HTTP 503 error. Red Hat build of Keycloak logs the error message in its log.

13.1.4. Probes

Red Hat build of Keycloak's liveness probe is non-blocking to avoid a restart of a Pod under a high load. The overall health probe and the readiness probe can in some cases block to check the connection to the database, so they might fail under a high load. Due to this, a Pod can become non-ready under a high load.

13.1.5. OS Resources

In order for Java to create threads, when running on Linux it needs to have file handles available. Therefore, the number of open files (as retrieved with ulimit -n on Linux) needs to provide headroom for Red Hat build of Keycloak to increase the number of threads needed. Each thread will also consume memory, and the container memory limits need to be set to a value that allows for this, or the Pod will be killed by Kubernetes.
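As a minimal sketch only, the options discussed in this chapter might be combined in conf/keycloak.conf as follows. The option names come from this chapter; the values are illustrative and must be tuned to your workload, cluster size, and database capacity.

# Quarkus worker threads; with four Pods this keeps the cluster total at 200,
# matching the default JGroups thread pool size discussed above.
http-pool-max-threads=50

# Load shedding: reject requests with HTTP 503 once this many requests are queued.
http-max-queued-requests=1000

# Database connection pool; keep these low enough that threads and connections
# do not queue up under load spikes.
db-pool-initial-size=10
db-pool-min-size=10
db-pool-max-size=30

The same options can equally be passed on the command line or, when deploying with the Operator, through the server options mechanism described in the linked deployment guide.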
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/concepts-threads-
Chapter 5. RHEL 8.3.0 release
Chapter 5. RHEL 8.3.0 release 5.1. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.3. 5.1.1. Installer and image creation Anaconda rebased to version 33.16 With this release, Anaconda has been rebased to version 33.16. This version provides the following notable enhancements over the version. The Installation Program now displays static IPv6 addresses on multiple lines and no longer resizes the windows. The Installation Program now displays supported NVDIMM device sector sizes. Host name is now configured correctly on an installed system having IPv6 static configuration. You can now use non-ASCII characters in disk encryption passphrase. The Installation Program displays a proper recommendation to create a new file system on /boot, /tmp, and all /var and /usr mount points except /usr/local and /var/www. The Installation Program now correctly checks the keyboard layout and does not change the status of the Keyboard Layout screen when the keyboard keys (ALT+SHIFT) are used to switch between different layouts and languages. Rescue mode no longer fails on systems with existing RAID1 partitions. Changing of the LUKS version of the container is now available in the Manual Partitioning screen. The Installation Program successfully finishes the installation without the btrfs-progs package. The Installation Program now uses the default LUKS2 version for an encrypted container. The Installation Program no longer crashes when a Kickstart file places physical volumes (PVs) of a Logical volume group (VG) on an ignoredisk list. Introduces a new mount path /mnt/sysroot for system root. This path is used to mount / of the target system. Usually, the physical root and the system root are the same, so /mnt/sysroot is attached to the same file system as /mnt/sysimage . The only exceptions are rpm-ostree systems, where the system root changes based on the deployment. Then, /mnt/sysroot is attached to a subdirectory of /mnt/sysimage . It is recommended to use /mnt/sysroot for chroot. ( BZ#1691319 , BZ#1679893 , BZ#1684045, BZ#1688478 , BZ#1700450, BZ#1720145, BZ#1723888, BZ#1754977 , BZ#1755996 , BZ#1784360 , BZ#1796310, BZ#1871680) GUI changes in RHEL Installation Program The RHEL Installation Program now includes the following user settings on the Installation Summary window: Root password User creation With this change, you can now configure a root password and create a user account before you begin the installation. Previously, you configured a root password and created a user account after you began the installation process. A root password is used to log in to the administrator (also known as superuser or root) account which is used for system administration tasks. The user name is used to log in from a command line; if you install a graphical environment, then your graphical login manager uses the full name. For more details, see Interactively installing RHEL from installation media document. (JIRA:RHELPLAN-40469) Image Builder backend osbuild-composer replaces lorax-composer The osbuild-composer backend replaces lorax-composer . The new service provides REST APIs for image building. As a result, users can benefit from a more reliable backend and more predictable output images. 
(BZ#1836211) Image Builder osbuild-composer supports a set of image types With the osbuild-composer backend replacement, the following set of image types supported in osbuild-composer this time: TAR Archive (.tar) QEMU QCOW2 (.qcow2) VMware Virtual Machine Disk (.vmdk) Amazon Machine Image (.ami) Azure Disk Image (.vhd) OpenStack Image (.qcow2) The following outputs are not supported this time: ext4-filesystem partitioned-disk Alibaba Cloud Google GCE (JIRA:RHELPLAN-42617) Image Builder now supports push to clouds through GUI With this enhancement, when creating images, users can choose the option of pushing to Azure and AWS service clouds through GUI Image Builder . As a result, users can benefit from easier uploads and instantiation. (JIRA:RHELPLAN-30878) 5.1.2. RHEL for Edge Introducing RHEL for Edge images With this release, you can now create customized RHEL images for Edge servers. You can use Image Builder to create RHEL for Edge images, and then use RHEL installer to deploy them on AMD and Intel 64-bit systems. Image Builder generates a RHEL for Edge image as rhel-edge-commit in a .tar file. A RHEL for Edge image is an rpm-ostree image that includes system packages for remotely installing RHEL on Edge servers. The system packages include: Base OS package Podman as the container engine You can customize the image to configure the OS content as per your requirements, and can deploy them on physical and virtual machines. With a RHEL for Edge image, you can achieve the following: Atomic upgrades, where the state of each update is known and no changes are seen until you reboot the device. Custom health checks using Greenboot and intelligent rollbacks for resiliency in case of failed upgrades. Container-focused workflows, where you can separate core OS updates from the application updates, and test and deploy different versions of applications. Optimized OTA payloads for low-bandwidth environments. Custom health checks using Greenboot to ensure resiliency. For more information about composing, installing, and managing RHEL for Edge images, see Composing, Installing, and Managing RHEL for Edge images . (JIRA:RHELPLAN-56676) 5.1.3. Software management The default value for the best dnf configuration option has been changed from True to False With this update, the value for the best dnf configuration option has been set to True in the default configuration file to retain the original dnf behavior. As a result, for users that use the default configuration file the behavior remains unchanged. If you provide your own configuration files, make sure that the best=True option is present to retain the original behavior. ( BZ#1832869 ) New --norepopath option for the dnf reposync command is now available Previously, the reposync command created a subdirectory under the --download-path directory for each downloaded repository by default. With this update, the --norepopath option has been introduced, and reposync does not create the subdirectory. As a result, the repository is downloaded directly into the directory specified by --download-path . This option is also present in the YUM v3 . ( BZ#1842285 ) Ability to enable and disable the libdnf plugins Previously, subscription checking was hardcoded into the RHEL version of the libdnf plug-ins. With this update, the microdnf utility can enable and disable the libdnf plug-ins, and subscription checking can now be disabled the same way as in DNF. To disable subscription checking, use the --disableplugin=subscription-manager command. 
To disable all plug-ins, use the --noplugins command. ( BZ#1781126 ) 5.1.4. Shells and command-line tools ReaR updates RHEL 8.3 introduces a number of updates to the Relax-and-Recover ( ReaR ) utility. Notable changes include: Support for the third-party Rubrik Cloud Data Management (CDM) as external backup software has been added. To use it, set the BACKUP option in the configuration file to CDM . Creation of a rescue image with a file larger than 4 GB on the IBM POWER, little endian architecture has been enabled. Disk layout created by ReaR no longer includes entries for Rancher 2 Longhorn iSCSI devices and file systems. (BZ#1743303) smartmontools rebased to version 7.1 The smartmontools package has been upgraded to version 7.1, which provides multiple bug fixes and enhancements. Notable changes include: HDD, SSD and USB additions to the drive database. New options -j and --json to enable JSON output mode. Workaround for the incomplete Log subpages response from some SAS SSDs. Improved handling of READ CAPACITY command. Various improvements for the decoding of the log pages. ( BZ#1671154 ) opencryptoki rebased to version 3.14.0 The opencryptoki packages have been upgraded to version 3.14.0, which provides multiple bug fixes and enhancements. Notable changes include: EP11 cryptographic service enhancements: Dilithium support Edwards-curve digital signature algorithm (EdDSA) support Support of Rivest-Shamir-Adleman optimal asymmetric encryption padding (RSA-OAEP) with non-SHA1 hash and mask generation function (MGF) Enhanced process and thread locking Enhanced btree and object locking Support for new IBM Z hardware z15 Support of multiple token instances for trusted platform module (TPM), IBM cryptographic architecture (ICA) and integrated cryptographic service facility (ICSF) Added a new tool p11sak , which lists the token keys in an openCryptoki token repository Added a utility to migrate a token repository to FIPS compliant encryption Fixed pkcsep11_migrate tool Minor fixes of the ICSF software (BZ#1780293) gpgme rebased to version 1.13.1. The gpgme packages have been upgraded to upstream version 1.13.1. Notable changes include: New context flags no-symkey-cache (has an effect when used with GnuPG 2.2.7 or later), request-origin (has an effect when used with GnuPG 2.2.6 or later), auto-key-locate , and trust-model have been introduced. New tool gpgme-json as native messaging server for web browsers has been added. As of now, the public key encryption and decryption is supported. New encryption API to support direct key specification including hidden recipients option and taking keys from a file has been introduced. This also allows the use of a subkey. ( BZ#1829822 ) 5.1.5. Infrastructure services powertop rebased to version 2.12 The powertop packages have been upgraded to version 2.12. Notable changes over the previously available version 2.11 include: Use of Device Interface Power Management (DIPM) for SATA link PM. Support for Intel Comet Lake mobile and desktop systems, the Skylake server, and the Atom-based Tremont architecture (Jasper Lake). (BZ#1783110) tuned rebased to version 2.14.0 The tuned packages have been upgraded to upstream version 2.14.0. Notable enhancements include: The optimize-serial-console profile has been introduced. Support for a post loaded profile has been added. The irqbalance plugin for handling irqbalance settings has been added. Architecture specific tuning for Marvell ThunderX and AMD based platforms has been added. 
Scheduler plugin has been extended to support cgroups-v1 for CPU affinity setting. ( BZ#1792264 ) tcpdump rebased to version 4.9.3 The tcpdump utility has been updated to version 4.9.3 to fix Common Vulnerabilities and Exposures (CVE). ( BZ#1804063 ) libpcap rebased to version 1.9.1 The libpcap packages have been updated to version 1.9.1 to fix Common Vulnerabilities and Exposures (CVE). ( BZ#1806422 ) iperf3 now supports the sctp option on the client side With this enhancement, the user can use Stream Control Transmission Protocol (SCTP) instead of Transmission Control Protocol (TCP) on the client side of testing network throughput. The following options for iperf3 are now available on the client side of testing: --sctp --xbind --nstreams To obtain more information, see Client Specific Options in the iperf3 man page. (BZ#1665142) iperf3 now supports SSL With this enhancement, the user can use RSA authentication between the client and the server to restrict the connections to the server only to legitimate clients. The following options for iperf3 are now available on the server side: --rsa-private-key-path --authorized-users-path The following options for iperf3 are now available on the client side of communication: --username --rsa-public-key-path ( BZ#1700497 ) bind rebased to 9.11.20 The bind package has been upgraded to version 9.11.20, which provides multiple bug fixes and enhancements. Notable changes include: Increased reliability on systems with many CPU cores by fixing several race conditions. Detailed error reporting: dig and other tools can now print the Extended DNS Error (EDE) option, if it is present. Message IDs in inbound DNS Zone Transfer Protocol (AXFR) transfers are checked and logged, when they are inconsistent. (BZ#1818785) A new optimize-serial-console TuneD profile to reduce I/O to serial consoles by lowering the printk value With this update, a new optimize-serial-console TuneD profile is available. In some scenarios, kernel drivers can send large amounts of I/O operations to the serial console. Such behavior can cause temporary unresponsiveness while the I/O is written to the serial console. The optimize-serial-console profile reduces this I/O by lowering the printk value from the default of 7 4 1 7 to 4 4 1 7 . Users with a serial console who wish to make this change on their system can switch to this profile using the tuned-adm utility. As a result, users will have a lower printk value that persists across a reboot, which reduces the likelihood of system hangs. This TuneD profile reduces the amount of I/O written to the serial console by removing debugging information. If you need to collect this debugging information, you should ensure this profile is not enabled and that your printk value is set to 7 4 1 7 . To check the value of printk, run cat /proc/sys/kernel/printk . ( BZ#1840689 ) New TuneD profiles added for the AMD-based platforms In RHEL 8.3, the throughput-performance TuneD profile was updated to include tuning for the AMD-based platforms. There is no need to change any parameter manually and the tuning is automatically applied on the AMD system. The AMD Epyc Naples and Rome systems alter the following parameters in the default throughput-performance profile: sched_migration_cost_ns=5000000 and kernel.numa_balancing=0 With this enhancement, the system performance is improved by ~5%. (BZ#1746957) memcached rebased to version 1.5.22 The memcached packages have been upgraded to version 1.5.22. Notable changes over the previous version include: TLS has been enabled.
The -o inline_ascii_response option has been removed. The -Y [authfile] option has been added along with authentication mode for the ASCII protocol. memcached can now recover its cache between restarts. New experimental meta commands have been added. Various performance improvements. ( BZ#1809536 ) 5.1.6. Security Cyrus SASL now supports channel bindings with the SASL/GSSAPI and SASL/GSS-SPNEGO plug-ins This update adds support for channel bindings with the SASL/GSSAPI and SASL/GSS-SPNEGO plug-ins. As a result, when used in the openldap libraries, this feature enables Cyrus SASL to maintain compatibility with and access to Microsoft Active Directory and Microsoft Windows systems which are introducing mandatory channel binding for LDAP connections. ( BZ#1817054 ) Libreswan rebased to 3.32 With this update, Libreswan has been rebased to upstream version 3.32, which includes several new features and bug fixes. Notable features include: Libreswan no longer requires separate FIPS 140-2 certification. Libreswan now implements the cryptographic recommendations of RFC 8247, and changes the preference from SHA-1 and RSA-PKCS v1.5 to SHA-2 and RSA-PSS. Libreswan supports XFRMi virtual ipsecXX interfaces that simplify writing firewall rules. Recovery of crashed and rebooted nodes in a full-mesh encryption network is improved. ( BZ#1820206 ) The libssh library has been rebased to version 0.9.4 The libssh library, which implements the SSH protocol, has been upgraded to version 0.9.4. This update includes bug fixes and enhancements, including: Added support for Ed25519 keys in PEM files. Added support for diffie-hellman-group14-sha256 key exchange algorithm. Added support for localuser in Match keyword in the libssh client configuration file. Match criteria keyword arguments are now case-sensitive (note that keywords are case-insensitive, but keyword arguments are case-sensitive) Fixed CVE-2019-14889 and CVE-2020-1730. Added support for recursively creating missing directories found in the path string provided for the known hosts file. Added support for OpenSSH keys in PEM files with comments and leading white spaces. Removed the OpenSSH server configuration inclusion from the libssh server configuration. ( BZ#1804797 ) gnutls rebased to 3.6.14 The gnutls packages have been rebased to upstream version 3.6.14. This version provides many bug fixes and enhancements, most notably: gnutls now rejects certificates with Time fields that contain invalid characters or formatting. gnutls now checks trusted CA certificates for minimum key sizes. When displaying an encrypted private key, the certtool utility no longer includes its plain text description. Servers using gnutls now advertise OCSP-stapling support. Clients using gnutls now send OCSP staples only on request. ( BZ#1789392 ) gnutls FIPS DH checks now conform with NIST SP 800-56A rev. 3 This update of the gnutls packages provides checks required by NIST Special Publication 800-56A Revision 3, sections 5.7.1.1 and 5.7.1.2, step 2. The change is necessary for future FIPS 140-2 certifications. As a result, gnutls now accept only 2048-bit or larger parameters from RFC 7919 and RFC 3526 during the Diffie-Hellman key exchange when operating in FIPS mode. ( BZ#1849079 ) gnutls now performs validations according to NIST SP 800-56A rev 3 This update of the gnutls packages adds checks required by NIST Special Publication 800-56A Revision 3, sections 5.6.2.2.2 and 5.6.2.1.3, step 2. The addition prepares gnutls for future FIPS 140-2 certifications. 
As a result, gnutls perform additional validation steps for generated and received public keys during the Diffie-Hellman key exchange when operating in FIPS mode. (BZ#1855803) update-crypto-policies and fips-mode-setup moved into crypto-policies-scripts The update-crypto-policies and fips-mode-setup scripts, which were previously included in the crypto-policies package, are now moved into a separate RPM subpackage crypto-policies-scripts . The package is automatically installed through the Recommends dependency on regular installations. This enables the ubi8/ubi-minimal image to avoid the inclusion of the Python language interpreter and thus reduces the image size. ( BZ#1832743 ) OpenSC rebased to version 0.20.0 The opensc package has been rebased to version 0.20.0 which addresses multiple bugs and security issues. Notable changes include: With this update, CVE-2019-6502 , CVE-2019-15946 , CVE-2019-15945 , CVE-2019-19480 , CVE-2019-19481 and CVE-2019-19479 security issues are fixed. The OpenSC module now supports the C_WrapKey and C_UnwrapKey functions. You can now use the facility to detect insertion and removal of card readers as expected. The pkcs11-tool utility now supports the CKA_ALLOWED_MECHANISMS attribute. This update allows default detection of the OsEID cards. The OpenPGP Card v3 now supports Elliptic Curve Cryptography (ECC). The PKCS#11 URI now truncates the reader name with ellipsis. ( BZ#1810660 ) stunnel rebased to version 5.56 With this update, the stunnel encryption wrapper has been rebased to upstream version 5.56, which includes several new features and bug fixes. Notable features include: New ticketKeySecret and ticketMacSecret options that control confidentiality and integrity protection of the issued session tickets. These options enable you to resume sessions on other nodes in a cluster. New curves option to control the list of elliptic curves in OpenSSL 1.1.0 and later. New ciphersuites option to control the list of permitted TLS 1.3 ciphersuites. Added sslVersion , sslVersionMin and sslVersionMax for OpenSSL 1.1.0 and later. ( BZ#1808365 ) libkcapi rebased to version 1.2.0 The libkcapi package has been rebased to upstream version 1.2.0, which includes minor changes. (BZ#1683123) setools rebased to 4.3.0 The setools package, which is a collection of tools designed to facilitate SELinux policy analysis, has been upgraded to version 4.3.0. This update includes bug fixes and enhancements, including: Revised sediff method for Type Enforcement (TE) rules, which significantly reduces memory and runtime issues. Added infiniband context support to seinfo , sediff , and apol . Added apol configuration for the location of the Qt assistant tool used to display online documentation. Fixed sediff issues with: Properties header displaying when not requested. Name comparison of type_transition files. Fixed permission of map socket sendto information flow direction. Added methods to the TypeAttribute class to make it a complete Python collection. Genfscon now looks up classes, rather than using fixed values which were dropped from libsepol . The setools package requires the following packages: setools-console setools-console-analyses setools-gui ( BZ#1820079 ) Individual CephFS files and directories can now have SELinux labels The Ceph File System (CephFS) has recently enabled storing SELinux labels in the extended attributes of files. Previously, all files in a CephFS volume were labeled with a single common label system_u:object_r:cephfs_t:s0 . 
With this enhancement, you can change the labels for individual files, and SELinux defines the labels of newly created files based on transition rules. Note that previously unlabeled files still have the system_u:object_r:cephfs_t:s0 label until explicitly changed. ( BZ#1823764 ) OpenSCAP rebased to version 1.3.3 The openscap packages have been upgraded to upstream version 1.3.3, which provides many bug fixes and enhancements over the version, most notably: Added the autotailor script that enables you to generate tailoring files using a command-line interface (CLI). Added the timezone part to the Extensible Configuration Checklist Description Format (XCCDF) TestResult start and end time stamps Added the yamlfilecontent independent probe as a draft implementation. Introduced the urn:xccdf:fix:script:kubernetes fix type in XCCDF. Added ability to generate the machineconfig fix. The oscap-podman tool can now detect ambiguous scan targets. The rpmverifyfile probe can now verify files from the /bin directory. Fixed crashes when complicated regexes are executed in the textfilecontent58 probe. Evaluation characteristics of the XCCDF report are now consistent with OVAL entities from the system_info probe. Fixed file-path pattern matching in offline mode in the textfilecontent58 probe. Fixed infinite recursion in the systemdunitdependency probe. ( BZ#1829761 ) SCAP Security Guide now provides a profile aligned with the CIS RHEL 8 Benchmark v1.0.0 With this update, the scap-security-guide packages provide a profile aligned with the CIS Red Hat Enterprise Linux 8 Benchmark v1.0.0. The profile enables you to harden the configuration of the system using the guidelines by the Center for Internet Security (CIS). As a result, you can configure and automate compliance of your RHEL 8 systems with CIS by using the CIS Ansible Playbook and the CIS SCAP profile. Note that the rpm_verify_permissions rule in the CIS profile does not work correctly. ( BZ#1760734 ) scap-security-guide now provides a profile that implements HIPAA This update of the scap-security-guide packages adds the Health Insurance Portability and Accountability Act (HIPAA) profile to the RHEL 8 security compliance content. This profile implements recommendations outlined on the The HIPAA Privacy Rule website. The HIPAA Security Rule establishes U.S. national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The Security Rule requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronically protected health information. ( BZ#1832760 ) scap-security-guide rebased to 0.1.50 The scap-security-guide packages, which contain the latest set of security policies for Linux systems, have been upgraded to version 0.1.50. This update includes bug fixes and enhancements, most notably: Ansible content has been improved: numerous rules contain Ansible remediations for the first time and other rules have been updated to address bug fixes. Fixes and improvements to the scap-security-guide content for scanning RHEL7 systems, including: The scap-security-guide packages now provide a profile aligned with the CIS RHEL 7 Benchmark v2.2.0. Note that the rpm_verify_permissions rule in the CIS profile does not work correctly; see the rpm_verify_permissions fails in the CIS profile known issue. The SCAP Security Guide profiles now correctly disable and mask services that should not be started. 
The audit_rules_privileged_commands rule in the scap-security-guide packages now works correctly for privileged commands. Remediation of the dconf_gnome_login_banner_text rule in the scap-security-guide packages no longer incorrectly fails. ( BZ#1815007 ) SCAP Workbench can now generate results-based remediations from tailored profiles With this update, you can now generate result-based remediation roles from tailored profiles using the SCAP Workbench tool. (BZ#1640715) New Ansible role provides automated deployments of Clevis clients This update of the rhel-system-roles package introduces the nbde_client RHEL system role. This Ansible role enables you to deploy multiple Clevis clients in an automated way. ( BZ#1716040 ) New Ansible role can now set up a Tang server With this enhancement, you can deploy and manage a Tang server as part of an automated disk encryption solution with the new nbde_server system role. The nbde_server Ansible role, which is included in the rhel-system-roles package, supports the following features: Rotating Tang keys Deploying and backing up Tang keys For more information, see Rotating Tang server keys . ( BZ#1716039 ) clevis rebased to version 13 The clevis packages have been rebased to version 13, which provides multiple bug fixes and enhancements. Notable changes include: clevis luks unlock can be used in the device with a key file in the non-interactive mode. clevis encrypt tpm2 parses the pcr_ids field if the input is given as a JSON array. The clevis-luks-unbind(1) man page no longer refers only to LUKS v1. clevis luks bind does not write to an inactive slot anymore, if the password given is incorrect. clevis luks bind now works while the system uses the non-English locale. Added support for tpm2-tools 4.x. ( BZ#1818780 ) clevis luks edit enables you to edit a specific pin configuration This update of the clevis packages introduces the new clevis luks edit subcommand that enables you to edit a specific pin configuration. For example, you can now change the URL address of a Tang server and the pcr_ids parameter in a TPM2 configuration. You can also add and remove new sss pins and change the threshold of an sss pin. (BZ#1436735) clevis luks bind -y now allows automated binding With this enhancement, Clevis supports automated binding with the -y parameter. You can now use the -y option with the clevis luks bind command, which automatically answers subsequent prompts with yes . For example, when using a Tang pin, you are no longer required to manually trust Tang keys. (BZ#1819767) fapolicyd rebased to version 1.0 The fapolicyd packages have been rebased to version 1.0, which provides multiple bug fixes and enhancements. Notable changes include: The multiple thread synchronization problem has been resolved. Enhanced performance with reduced database size and loading time. A new trust option for the fapolicyd package in the fapolicyd.conf file has been added to customize trust back end. You can add all trusted files, binaries, and scripts to the new /etc/fapolicyd/fapolicyd.trust file. You can manage the fapolicyd.trust file using the CLI. You can clean or dump the database using the CLI. The fapolicyd package overrides the magic database for better decoding of scripts. The CLI prints MIME type of the file similar to the file command according to the override. The /etc/fapolicyd/fapolicyd.rules file supports a group of values as attribute values. The fapolicyd daemon has a syslog_format option for setting the format of the audit/sylog events. 
( BZ#1817413 ) fapolicyd now provides its own SELinux policy in fapolicyd-selinux With this enhancement, the fapolicyd framework now provides its own SELinux security policy. The daemon is confined under the fapolicyd_t domain and the policy is installed through the fapolicyd-selinux subpackage. ( BZ#1714529 ) USBGuard rebased to version 0.7.8 The usbguard packages have been rebased to version 0.7.8 which provides multiple bug fixes and enhancements. Notable changes include: The HidePII=true|false parameter in the /etc/usbguard/usbguard-daemon.conf file can now hide personally identifiable information from audit entries. The AuthorizedDefault=keep|none|all|internal parameter in the /etc/usbguard/usbguard-daemon.conf file can predefine authorization state of controller devices. With the new with-connect-type rule attribute, users can now distinguish the connection type of the device. Users can now append temporary rules with the -t option. Temporary rules remain in memory only until the daemon restarts. usbguard list-rules can now filter rules according to certain properties. usbguard generate-policy can now generate a policy for specific devices. The usbguard allow|block|reject command can now handle rule strings, and a target is applied on each device that matches the specified rule string. New subpackages usbguard-notifier and usbguard-selinux are included. ( BZ#1738590 ) USBGuard provides many improvements for corporate desktop users This addition to the USBGuard project contains enhancements and bug fixes to improve the usability for corporate desktop users. Important changes include: For keeping the /etc/usbguard/rules.conf rule file clean, users can define multiple configuration files inside the RuleFolder=/etc/usbguard/rules.d/ directory. By default, the RuleFolder is specified in the /etc/usbguard-daemon.conf file. The usbguard-notifier tool now provides GUI notifications. The tool notifies the user whenever a device is plugged in or plugged out and whether the device is allowed, blocked, or rejected by any user. You can now include comments in the configuration files, because the usbguard-daemon no longer parses lines starting with # . ( BZ#1667395 ) USBGuard now provides its own SELinux policy in usbguard-selinux With this enhancement, the USBGuard framework now provides its own SELinux security policy. The daemon is confined under the usbguard_t domain and the policy is installed through the usbguard-selinux subpackage. ( BZ#1683567 ) libcap now supports ambient capabilities With this update, users are able to grant ambient capabilities at login and prevent the need to have root access for the appropriately configured processes. (BZ#1487388) The libseccomp library has been rebased to version 2.4.3 The libseccomp library, which provides an interface to the seccomp system call filtering mechanism, has been upgraded to version 2.4.3. This update provides numerous bug fixes and enhancements. Notable changes include: Updated the syscall table for Linux v5.4-rc4. No longer defining __NR_x values for system calls that do not exist. __SNR_x is now used internally. Added define for __SNR_ppoll . Fixed a multiplexing issue with s390/s390x shm* system calls. Removed the static flag from the libseccomp tools compilation. Added support for io-uring related system calls. Fixed the Python module naming issue introduced in the v2.4.0 release; the module is named seccomp as it was previously. Fixed a potential memory leak identified by clang in the scmp_bpf_sim tool. 
( BZ#1770693 ) omamqp1 module is now supported With this update, the AMQP 1.0 protocol supports sending messages to a destination on the bus. OpenStack uses the AMQP1 protocol as a communication standard, and log messages can now be carried in AMQP messages over this protocol. This update introduces the rsyslog-omamqp1 sub-package to deliver the omamqp1 output module, which logs messages and sends them to the destination on the bus. ( BZ#1713427 ) OpenSCAP compresses remote content With this update, OpenSCAP uses gzip compression for transferring remote content. The most common type of remote content is text-based CVE feeds, which increase in size over time and typically have to be downloaded for every scan. The gzip compression reduces the bandwidth to 10% of the bandwidth needed for uncompressed content. As a result, this reduces bandwidth requirements across the entire chain between the scanned system and the server that hosts the remote content. ( BZ#1855708 ) SCAP Security Guide now provides a profile aligned with NIST-800-171 With this update, the scap-security-guide packages provide a profile aligned with the NIST-800-171 standard. The profile enables you to harden the system configuration in accordance with security requirements for protection of Controlled Unclassified Information (CUI) in non-federal information systems. As a result, you can more easily configure systems to be aligned with the NIST-800-171 standard. ( BZ#1762962 ) 5.1.7. Networking The IPv4 and IPv6 connection tracking modules have been merged into the nf_conntrack module This enhancement merges the nf_conntrack_ipv4 and nf_conntrack_ipv6 Netfilter connection tracking modules into the nf_conntrack kernel module. Due to this change, blacklisting the address family-specific modules no longer works in RHEL 8.3, and you can blacklist only the nf_conntrack module to disable connection tracking support for both the IPv4 and IPv6 protocols. (BZ#1822085) firewalld rebased to version 0.8.2 The firewalld packages have been upgraded to upstream version 0.8.2, which provides a number of bug fixes over the previous version. For details, see the firewalld 0.8.2 Release Notes . ( BZ#1809636 ) NetworkManager rebased to version 1.26.0 The NetworkManager packages have been upgraded to upstream version 1.26.0, which provides a number of enhancements and bug fixes over the previous version: NetworkManager resets the auto-negotiation, speed, and duplex settings to their original values when deactivating a device. Wi-Fi profiles now connect automatically even if all previous activation attempts failed. This means that an initial failure to auto-connect to the network no longer blocks automatic connection attempts. A side effect is that existing Wi-Fi profiles that were previously blocked now connect automatically. The nm-settings-nmcli(5) and nm-settings-dbus(5) man pages have been added. Support for a number of bridge parameters has been added. Support for virtual routing and forwarding (VRF) interfaces has been added. For further details, see Permanently reusing the same IP address on different interfaces . Support for Opportunistic Wireless Encryption mode (OWE) for Wi-Fi networks has been added. NetworkManager now supports 31-bit prefixes on IPv4 point-to-point links according to RFC 3021 . The nmcli utility now supports removing settings using the nmcli connection modify <connection_name> remove <setting> command. NetworkManager no longer creates and activates slave devices if a master device is missing. 
For further information about notable changes, read the upstream release notes: NetworkManager 1.26.0 NetworkManager 1.24.0 ( BZ#1814746 ) XDP is conditionally supported Red Hat supports the eXpress Data Path (XDP) feature only if all of the following conditions apply: You load the XDP program on an AMD or Intel 64-bit architecture You use the libxdp library to load the program into the kernel The XDP program uses one of the following return codes: XDP_ABORTED , XDP_DROP , or XDP_PASS The XDP program does not use the XDP hardware offloading For details about unsupported XDP features, see Overview of XDP features that are available as Technology Preview ( BZ#1889736 ) xdp-tools is partially supported The xdp-tools package, which contains user space support utilities for the kernel eXpress Data Path (XDP) feature, is now supported on the AMD and Intel 64-bit architectures. This includes the libxdp library, the xdp-loader utility for loading XDP programs, and the xdp-filter example program for packet filtering. Note that the xdpdump utility for capturing packets from a network interface with XDP enabled is still a Technology Preview. (BZ#1820670) The dracut utility by default now uses NetworkManager in initial RAM disk Previously, the dracut utility was using a shell script to manage networking in the initial RAM disk, initrd . In certain cases, this could cause problems. For example, the NetworkManager sends another DHCP request, even if the script in the RAM disk has already requested an IP address, which could result in a timeout. With this update, the dracut by default now uses the NetworkManager in the initial RAM disk and prevents the system from running into issues. In case you want to switch back to the implementation, and recreate the RAM disk images, use the following commands: (BZ#1626348) Network configuration in the kernel command line has been consolidated under the ip parameter The ipv6 , netmask , gateway , and hostname parameters to set the network configuration in the kernel command line have been consolidated under the ip parameter. The ip parameter accepts different formats, such as the following: For further details about the individual fields and other formats this parameter accepts, see the description of the ip parameter in the dracut.cmdline(7) man page. The ipv6 , netmask , gateway , and hostname parameters are no longer available in RHEL 8. (BZ#1905138) 5.1.8. Kernel Kernel version in RHEL 8.3 Red Hat Enterprise Linux 8.3 is distributed with the kernel version 4.18.0-240. ( BZ#1839151 ) Extended Berkeley Packet Filter for RHEL 8.3 The Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes a special assembly-like code. The eBPF bytecode first loads to the kernel, followed by its verification, code translation to the native machine code with just-in-time compilation, and then the virtual machine executes the code. Red Hat ships numerous components that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. 
In RHEL 8.3, the following eBPF components are supported: The BPF Compiler Collection (BCC) tools package, which provides tools for I/O analysis, networking, and monitoring of Linux operating systems using eBPF. The BCC library, which allows the development of tools similar to those provided in the BCC tools package. The eBPF for Traffic Control (tc) feature, which enables programmable packet processing inside the kernel network data path. The eXpress Data Path (XDP) feature, which provides access to received packets before the kernel networking stack processes them, is supported under specific conditions. For more details, refer to the Networking section of the Release Notes. The libbpf package, which is crucial for bpf-related applications like bpftrace and bpf/xdp development. For more details, refer to the dedicated release note libbpf fully supported . The xdp-tools package, which contains userspace support utilities for the XDP feature, is now supported on the AMD and Intel 64-bit architectures. This includes the libxdp library, the xdp-loader utility for loading XDP programs, and the xdp-filter example program for packet filtering. Note that the xdpdump utility for capturing packets from a network interface with XDP enabled is still an unsupported Technology Preview. For more details, refer to the Networking section of the Release Notes. Note that all other eBPF components are available as Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as Technology Preview: The bpftrace tracing language The AF_XDP socket for connecting the eXpress Data Path (XDP) path to user space For more information regarding the Technology Preview components, see Technology Previews . ( BZ#1780124 ) Cornelis Networks Omni-Path Architecture (OPA) Host Software Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.3. OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. ( BZ#1893174 ) TSX is now disabled by default Starting with RHEL 8.3, the kernel now has the Intel(R) Transactional Synchronization Extensions (TSX) technology disabled by default to improve the OS security. The change applies to those CPUs that support disabling TSX , including the 2nd Generation Intel(R) Xeon(R) Scalable Processors (formerly known as Cascade Lake with Intel(R) C620 Series Chipsets). For users whose applications do not use TSX , the change removes the default performance penalty of the TSX Asynchronous Abort (TAA) mitigations on the 2nd Generation Intel(R) Xeon(R) Scalable Processors. The change also aligns the RHEL kernel behavior with upstream, where TSX has been disabled by default since Linux 5.4. To enable TSX , add the tsx=on parameter to the kernel command line. (BZ#1828642) RHEL 8.3 now supports the page owner tracking feature With this update, you can use the page owner tracking feature to observe the kernel memory utilization at the page allocation level. To enable the page tracker, execute the steps shown in the sketch after this note. As a result, the page owner tracker will track the kernel memory consumption, which helps to debug kernel memory leaks and detect the drivers that use a lot of memory. 
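A minimal sketch of enabling the tracker follows. It assumes the grubby utility is used to edit the kernel command line and that debugfs is mounted at its default location; the output file name is arbitrary:
# Enable page owner tracking for all installed kernels (takes effect after a reboot)
grubby --update-kernel=ALL --args="page_owner=on"
reboot
# After the reboot, dump the collected page allocation ownership data
cat /sys/kernel/debug/page_owner > page_owner_full.txt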
(BZ#1825414) EDAC for AMD EPYC™ 7003 Series Processors is now supported This enhancement provides Error Detection And Correction (EDAC) device support for AMD EPYC™ 7003 Series Processors. Previously, corrected (CEs) and uncorrected (UEs) memory errors were not reported on systems based on AMD EPYC™ 7003 Series Processors. With this update, such errors will now be reported using EDAC. (BZ#1735611) Flamegraph is now supported with the perf tool With this update, the perf command line tool supports flamegraphs to create a graphical representation of the system's performance. The perf data is grouped together into samples with similar stack backtraces. As a result, this data is converted into a visual representation to allow easier identification of computationally intensive areas of code. To generate a flamegraph using the perf tool, execute the following commands: Note: To generate flamegraphs, install the js-d3-flame-graph rpm. (BZ#1281843) /dev/random and /dev/urandom are now conditionally powered by the Kernel Crypto API DRBG In FIPS mode, the /dev/random and /dev/urandom pseudorandom number generators are powered by the Kernel Crypto API Deterministic Random Bit Generator (DRBG). Applications in FIPS mode use the mentioned devices as a FIPS-compliant noise source, therefore the devices have to employ FIPS-approved algorithms. To achieve this goal, necessary hooks have been added to the /dev/random driver. As a result, the hooks are enabled in the FIPS mode and cause /dev/random and /dev/urandom to connect to the Kernel Crypto API DRBG. (BZ#1785660) libbpf fully supported The libbpf package, crucial for bpf-related applications like bpftrace and bpf/xdp development, is now fully supported. It is a mirror of the bpf-next Linux tree's tools/lib/bpf directory plus its supporting header files. The version of the package reflects the version of the Application Binary Interface (ABI). (BZ#1759154) lshw utility now provides additional CPU information With this enhancement, the List Hardware utility ( lshw ) displays more CPU information. The CPU version field now provides the family, model and stepping details of the system processors in numeric format as version: <family>.<model>.<stepping> . ( BZ#1794049 ) kernel-rt source tree has been updated to the RHEL 8.3 tree The kernel-rt sources have been updated to use the latest Red Hat Enterprise Linux kernel source tree. The real-time patch set has also been updated to the latest upstream version, v5.6.14-rt7. Both of these updates provide a number of bug fixes and enhancements. (BZ#1818138, BZ#1818142) tpm2-tools rebased to version 4.1.1 The tpm2-tools package has been upgraded to version 4.1.1, which provides a number of command additions, updates, and removals. For more details, see the Updates to tpm2-tools package in RHEL8.3 solution. (BZ#1789682) The Mellanox ConnectX-6 Dx network adapter is now fully supported This enhancement adds the PCI IDs of the Mellanox ConnectX-6 Dx network adapter to the mlx5_core driver. On hosts that use this adapter, RHEL loads the mlx5_core driver automatically. This feature, previously available as a technology preview, is now fully supported in RHEL 8.3. (BZ#1782831) mlxsw driver rebased to version 5.7 The mlxsw driver has been upgraded to upstream version 5.7 and includes the following new features: The shared buffer occupancy feature, which provides buffer occupancy data. The packet drop feature, which enables monitoring of layer 2, layer 3, tunnel, and access control list drops. Packet trap policers support. 
Default port priority configuration support using the Link Layer Discovery Protocol (LLDP) agent. Enhanced Transmission Selection (ETS) and Token Bucket Filter (TBF) queuing discipline offloading support. RED queuing discipline nodrop mode is enabled to prevent early packet drops. The traffic class SKB editing action skbedit priority feature enables changing packet metadata and complements the pedit Traffic Class Offloading (TOS). (BZ#1821646) The crash kernel now expands memory reserve for kdump With this enhancement, the crashkernel=auto argument now reserves more memory on machines with 4GB to 64GB memory capacity. Previously, due to limited memory reserve, the crash kernel failed to capture the crash dump as the kernel space and user space memory expanded. As a consequence, the crash kernel experienced an out-of-memory (OOM) error. This update helps to reduce the OOM error occurrences in the described scenario and expands the memory capacity for kdump accordingly. (BZ#1746644) 5.1.9. File systems and storage LVM can now manage VDO volumes LVM now supports the Virtual Data Optimizer (VDO) segment type. As a result, you can now use LVM utilities to create and manage VDO volumes as native LVM logical volumes. VDO provides inline block-level deduplication, compression, and thin provisioning features. For more information, see Deduplicating and compressing logical volumes on RHEL . (BZ#1598199) The SCSI stack now works better with high-performance adapters The performance of the SCSI stack has been improved. As a result, next-generation, high-performance host bus adapters (HBAs) are now capable of higher IOPS (I/Os per second) on RHEL. (BZ#1761928) The megaraid_sas driver has been updated to the latest version The megaraid_sas driver has been updated to version 07.713.01.00-rc1. This update provides several bug fixes and enhancements relating to improving performance, better stability of supported MegaRAID adapters, and a richer feature set. (BZ#1791041) Stratis now lists the pool name on error When you attempt to create a Stratis pool on a block device that is already in use by an existing Stratis pool, the stratis utility now reports the name of the existing pool. Previously, the utility listed only the UUID label of the pool. ( BZ#1734496 ) FPIN ELS frame notification support The lpfc Fibre Channel (FC) driver now supports Fabric Performance Impact Notifications (FPINs) regarding link integrity, which help identify link-level issues and allow the switch to choose a more reliable path. (BZ#1796565) New commands to debug LVM on-disk metadata The pvck utility, which is available from the lvm2 package, now provides low-level commands to debug or rescue LVM on-disk metadata on physical volumes: To extract metadata, use the pvck --dump command. To repair metadata, use the pvck --repair command. For more information, see the pvck(8) man page. (BZ#1541165) LVM RAID supports DM integrity to prevent data loss due to corrupted data on a device It is now possible to add Device Mapper (DM) integrity to an LVM RAID configuration to prevent data loss. The integrity layer detects data corruption on a device and alerts the RAID layer to fix the corrupted data across the LVM RAID. While RAID prevents data loss due to device failure, adding integrity to an LVM RAID array prevents data loss due to corrupted data on a device. You can add the integrity layer when you create a new LVM RAID, or you can add it to an LVM RAID that already exists. 
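A minimal sketch of both cases follows, assuming the LVM raidintegrity options available in RHEL 8.3; the volume group and logical volume names are hypothetical:
# Create a new RAID1 logical volume with DM integrity enabled from the start
lvcreate --type raid1 --raidintegrity y -L 1G -n new_lv my_vg
# Add DM integrity to an existing LVM RAID logical volume
lvconvert --raidintegrity y my_vg/existing_raid_lv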
(JIRA:RHELPLAN-39320) Resilient Storage (GFS2) supported on AWS, Azure, and Aliyun public clouds Resilient Storage (GFS2) is now supported on three major public clouds, Amazon (AWS), Microsoft (Azure), and Alibaba (Aliyun), with the introduction of shared block device support on those platforms. As a result, GFS2 is now a true hybrid cloud cluster filesystem with options to use both on premises and in the public cloud. For information on configuring shared block storage on Microsoft Azure and on AWS, see Deploying RHEL 8 on Microsoft Azure and Deploying RHEL 8 on Amazon Web Services . For information on configuring shared block storage on Alibaba Cloud, see Configuring Shared Block Storage for a Red Hat High Availability Cluster on Alibaba Cloud . ( BZ#1900019 ) Userspace now supports the latest nfsdcld daemon Userspace now supports the latest nfsdcld daemon, which is the only namespace-aware client tracking method. This enhancement ensures client open or lock recovery from the containerized knfsd daemon without any data corruption. ( BZ#1817756 ) nconnect now supports multiple concurrent connections With this enhancement, you can use the nconnect functionality to create multiple concurrent connections to an NFS server, allowing for a different load balancing ability. Enable the nconnect functionality with the nconnect=X NFS mount option, where X is the number of concurrent connections to use. The current limit is 16. (BZ#1683394, BZ#1761352) nfsdcld daemon for client information tracking is now supported With this enhancement, the nfsdcld daemon is now the default method for tracking per-client information on stable storage. As a result, NFS v4 running in containers allows clients to reclaim opens or locks after a server restart. (BZ#1817752) 5.1.10. High availability and clusters pacemaker rebased to version 2.0.4 The Pacemaker cluster resource manager has been upgraded to upstream version 2.0.4, which provides a number of bug fixes. ( BZ#1828488 ) New priority-fencing-delay cluster property Pacemaker now supports the new priority-fencing-delay cluster property, which allows you to configure a two-node cluster so that in a split-brain situation the node with the fewest resources running is the node that gets fenced. The priority-fencing-delay property can be set to a time duration. The default value for this property is 0 (disabled). If this property is set to a non-zero value, and the priority meta-attribute is configured for at least one resource, then in a split-brain situation the node with the highest combined priority of all resources running on it will be more likely to survive. For example, if you set pcs resource defaults priority=1 and pcs property set priority-fencing-delay=15s and no other priorities are set, then the node running the most resources will be more likely to survive because the other node will wait 15 seconds before initiating fencing. If a particular resource is more important than the rest, you can give it a higher priority. The node running the master role of a promotable clone will get an extra 1 point if a priority has been configured for that clone. Any delay set with priority-fencing-delay will be added to any delay from the pcmk_delay_base and pcmk_delay_max fence device properties. This behavior allows some delay when both nodes have equal priority, or both nodes need to be fenced for some reason other than node loss (for example, on-fail=fencing is set for a resource monitor operation). 
If used in combination, it is recommended that you set the priority-fencing-delay property to a value that is significantly greater than the maximum delay from pcmk_delay_base and pcmk_delay_max , to be sure the prioritized node is preferred (twice the value would be completely safe). ( BZ#1784601 ) New commands for managing multiple sets of resource and operation defaults It is now possible to create, list, change and delete multiple sets of resource and operation defaults. When you create a set of default values, you can specify a rule that contains resource and op expressions. This allows you, for example, to configure a default resource value for all resources of a particular type. Commands that list existing default values now include multiple sets of defaults in their output. The pcs resource [op] defaults set create command creates a new set of default values. When specifying rules with this command, only resource and op expressions, including and , or and parentheses, are allowed. The pcs resource [op] defaults set delete | remove command removes sets of default values. The pcs resource [op] defaults set update command changes the default values in a set. (BZ#1817547) Support for tagging cluster resources It is now possible to tag cluster resources in a Pacemaker cluster with the pcs tag command. This feature allows you to administer a specified set of resources with a single command. You can also use the pcs tag command to remove or modify a resource tag, and to display the tag configuration. The pcs resource enable , pcs resource disable , pcs resource manage , and pcs resource unmanage commands accept tag IDs as arguments. ( BZ#1684676 ) Pacemaker now supports recovery by demoting a promoted resource rather than fully stopping it It is now possible to configure a promotable resource in a Pacemaker cluster so that when a promote or monitor action fails for that resource, or the partition in which the resource is running loses quorum, the resource will be demoted but will not be fully stopped. This feature can be useful when you would prefer that the resource continue to be available in the unpromoted mode. For example, if a database master's partition loses quorum, you might prefer that the database resource lose the Master role, but stay alive in read-only mode so applications that only need to read can continue to work despite the lost quorum. This feature can also be useful when a successful demote is both sufficient for recovery and much faster than a full restart. To support this feature: The on-fail operation meta-attribute now accepts a demote value when used with promote actions, as in the following example: The on-fail operation meta-attribute now accepts a demote value when used with monitor actions with both interval set to a nonzero value and role set to Master , as in the following example: The no-quorum-policy cluster property now accepts a demote value. When set, if a cluster partition loses quorum, any promoted resources will be demoted but left running and all other resources will be stopped. Specifying a demote meta-attribute for an operation does not affect how promotion of a resource is determined. If the affected node still has the highest promotion score, it will be selected to be promoted again. 
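The operation settings described above can also be expressed with pcs. The following is a minimal sketch; the resource name is hypothetical and the exact pcs syntax can differ between pcs versions:
# Demote (rather than fully stop) the resource when a promote action fails
pcs resource update my-db op promote on-fail=demote
# Demote on a failed recurring monitor of the Master role
pcs resource update my-db op monitor interval=10s role=Master on-fail=demote
# Demote promoted resources instead of stopping everything when quorum is lost
pcs property set no-quorum-policy=demote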
(BZ#1837747, BZ#1843079 ) New SBD_SYNC_RESOURCE_STARTUP SBD configuration parameter to improve synchronization with Pacemaker To better control synchronization between SBD and Pacemaker, the /etc/sysconfig/sbd file now supports the SBD_SYNC_RESOURCE_STARTUP parameter. When Pacemaker and SBD packages from RHEL 8.3 or later are installed and SBD is configured with SBD_SYNC_RESOURCE_STARTUP=true , SBD contacts the Pacemaker daemon for information about the daemon's state. In this configuration, the Pacemaker daemon will wait until it has been contacted by SBD, both before starting its subdaemons and before final exit. As a result, Pacemaker will not run resources if SBD cannot actively communicate with it, and Pacemaker will not exit until it has reported a graceful shutdown to SBD. This prevents the unlikely situation that might occur during a graceful shutdown when SBD fails to detect the brief moment when no resources are running before Pacemaker finally disconnects, which would trigger an unneeded reboot. Detecting a graceful shutdown using a defined handshake works in maintenance mode as well. The method of detecting a graceful shutdown on the basis of no running resources left had to be disabled in maintenance mode since running resources would not be touched on shutdown. In addition, enabling this feature avoids the risk of a split-brain situation in a cluster when SBD and Pacemaker both start successfully but SBD is unable to contact pacemaker. This could happen, for example, due to SELinux policies. In this situation, Pacemaker would assume that SBD is functioning when it is not. With this new feature enabled, Pacemaker will not complete startup until SBD has contacted it. Another advantage of this new feature is that when it is enabled SBD will contact Pacemaker repeatedly, using a heartbeat, and it is able to panic the node if Pacemaker stops responding at any time. Note If you have edited your /etc/sysconfig/sbd file or configured SBD through PCS, then an RPM upgrade will not pull in the new SBD_SYNC_RESOURCE_STARTUP parameter. In these cases, to implement this feature you must manually add it from the /etc/sysconfig/sbd.rpmnew file or follow the procedure described in the Configuration via environment section of the sbd (8) man page. ( BZ#1718324 , BZ#1743726 ) 5.1.11. Dynamic programming languages, web and database servers A new module stream: ruby:2.7 RHEL 8.3 introduces Ruby 2.7.1 in a new ruby:2.7 module stream. This version provides a number of performance improvements, bug and security fixes, and new features over Ruby 2.6 distributed with RHEL 8.1. Notable enhancements include: A new Compaction Garbage Collector (GC) has been introduced. This GC can defragment a fragmented memory space. Ruby yet Another Compiler-Compiler (Racc) now provides a command-line interface for the one-token Look-Ahead Left-to-Right - LALR(1) - parser generator. Interactive Ruby Shell ( irb ), the bundled Read-Eval-Print Loop (REPL) environment, now supports multi-line editing. Pattern matching, frequently used in functional programming languages, has been introduced as an experimental feature. Numbered parameter as the default block parameter has been introduced as an experimental feature. The following performance improvements have been implemented: Fiber cache strategy has been changed to accelerate fiber creation. Performance of the CGI.escapeHTML method has been improved. Performance of the Monitor class and MonitorMixin module has been improved. 
In addition, automatic conversion of keyword arguments and positional arguments has been deprecated. In Ruby 3.0, positional arguments and keyword arguments will be separated. For more information, see the upstream documentation . To suppress warnings against experimental features, use the -W:no-experimental command-line option. To disable a deprecation warning, use the -W:no-deprecated command-line option or add Warning[:deprecated] = false to your code. To install the ruby:2.7 module stream, use: If you want to upgrade from the ruby:2.6 stream, see Switching to a later stream . (BZ#1817135) A new module stream: nodejs:14 A new module stream, nodejs:14 , is now available. Node.js 14 , included in RHEL 8.3, provides numerous new features and bug and security fixes over Node.js 12 distributed in RHEL 8.1. Notable changes include: The V8 engine has been upgraded to version 8.3. A new experimental WebAssembly System Interface (WASI) has been implemented. A new experimental Async Local Storage API has been introduced. The diagnostic report feature is now stable. The streams APIs have been hardened. Experimental modules warnings have been removed. With the release of the RHEA-2020:5101 advisory, RHEL 8 provides Node.js 14.15.0 , which is the most recent Long Term Support (LTS) version with improved stability. To install the nodejs:14 module stream, use: If you want to upgrade from the nodejs:12 stream, see Switching to a later stream . (BZ#1815402, BZ#1891809 ) git rebased to version 2.27 The git packages have been upgraded to upstream version 2.27. Notable changes over the previously available version 2.18 include: The git checkout command has been split into two separate commands: git switch for managing branches git restore for managing changes within the directory tree The behavior of the git rebase command is now based on the merge workflow by default rather than the patch+apply workflow. To preserve the behavior, set the rebase.backend configuration variable to apply . The git difftool command can now be used also outside a repository. Four new configuration variables, {author,committer}.{name,email} , have been introduced to override user.{name,email} in more specific cases. Several new options have been added that enable users to configure SSL for communication with proxies. Handling of commits with log messages in non-UTF-8 character encoding has been improved in the git fast-export and git fast-import utilities. The lfs extension has been added as a new git-lfs package. Git Large File Storage (LFS) replaces large files with text pointers inside Git and stores the file contents on a remote server. ( BZ#1825114 , BZ#1783391) Changes in Python RHEL 8.3 introduces the following changes to the python38:3.8 module stream: The Python interpreter has been updated to version 3.8.3, which provides several bug fixes. The python38-pip package has been updated to version 19.3.1, and pip now supports installing manylinux2014 wheels. Performance of the Python 3.6 interpreter, provided by the python3 packages, has been significantly improved. The ubi8/python-27 , ubi8/python-36 , and ubi8/python-38 container images now support installing the pipenv utility from a custom package index or a PyPI mirror if provided by the customer. Previously, pipenv could only be downloaded from the upstream PyPI repository, and if the upstream repository was unavailable, the installation failed. 
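To try the updated interpreter and pip described above, a minimal sketch of installing and invoking the python38 module stream follows; the exact contents of the default module profile may vary:
# Install the Python 3.8 module stream
yum module install python38
# Invoke the updated interpreter and its bundled pip
python3.8 --version
python3.8 -m pip --version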
( BZ#1847416 , BZ#1724996 , BZ#1827623, BZ#1841001 ) A new module stream: php:7.4 RHEL 8.3 introduces PHP 7.4 , which provides a number of bug fixes and enhancements over version 7.3. This release introduces a new experimental extension, Foreign Function Interface (FFI), which enables you to call native functions, access native variables, and create and access data structures defined in C libraries. The FFI extension is available in the php-ffi package. The following extensions have been removed: The wddx extension, removed from the php-xml package The recode extension, removed from the php-recode package. To install the php:7.4 module stream, use: If you want to upgrade from the php:7.3 stream, see Switching to a later stream . For details regarding PHP usage on RHEL 8, see Using the PHP scripting language . ( BZ#1797661 ) A new module stream: nginx:1.18 The nginx 1.18 web and proxy server, which provides a number of bug fixes, security fixes, new features and enhancements over version 1.16, is now available. Notable changes include: Enhancements to HTTP request rate and connection limiting have been implemented. For example, the limit_rate and limit_rate_after directives now support variables, including the new $limit_req_status and $limit_conn_status variables. In addition, dry-run mode has been added for the limit_conn_dry_run and limit_req_dry_run directives. A new auth_delay directive has been added, which enables delayed processing of unauthorized requests. The following directives now support variables: grpc_pass , proxy_upload_rate , and proxy_download_rate . Additional PROXY protocol variables have been added, namely $proxy_protocol_server_addr and $proxy_protocol_server_port . To install the nginx:1.18 stream, use: If you want to upgrade from the nginx:1.16 stream, see Switching to a later stream . ( BZ#1826632 ) A new module stream: perl:5.30 RHEL 8.3 introduces Perl 5.30 , which provides a number of bug fixes and enhancements over the previously released Perl 5.26 . The new version also deprecates or removes certain language features. Notable changes with significant impact include: The Math::BigInt::CalcEmu , arybase , and B::Debug modules have been removed File descriptors are now opened with a close-on-exec flag Opening the same symbol as a file and as a directory handle is no longer allowed Subroutine attributes now must precede subroutine signatures The :locked and :uniq attributes have been removed Comma-less variable lists in formats are no longer allowed A bare << here-document operator is no longer allowed Certain formerly deprecated uses of an unescaped left brace ( { ) character in regular expression patterns are no longer permitted The AUTOLOAD() subroutine can no longer be inherited to non-method functions The sort pragma no longer allows specifying a sort algorithm The B::OP::terse() subroutine has been replaced by the B::Concise::b_terse() subroutine The File::Glob::glob() function has been replaced by the File::Glob::bsd_glob() function The dump() function now must be invoked fully qualified as CORE::dump() The yada-yada operator ( ... 
) is a statement now; it cannot be used as an expression Assigning a non-zero value to the $[ variable now returns a fatal error The $* and $# variables are no longer allowed Declaring variables using the my() function in a false condition branch is no longer allowed Using the sysread() and syswrite() functions on the :utf8 handles now returns a fatal error The pack() function no longer returns malformed UTF-8 format Unicode code points with a value greater than IV_MAX are no longer allowed Unicode 12.1 is now supported To upgrade from an earlier perl module stream, see Switching to a later stream . Perl 5.30 is also available as an s2i-enabled ubi8/perl-530 container image. ( BZ#1713592 , BZ#1732828 ) A new module stream: perl-libwww-perl:6.34 RHEL 8.3 introduces a new perl-libwww-perl:6.34 module stream, which provides the perl-libwww-perl package for all versions of Perl available in RHEL 8. The non-modular perl-libwww-perl package, available since RHEL 8.0, which cannot be used with Perl streams other than 5.26, has been obsoleted by the new default perl-libwww-perl:6.34 stream. ( BZ#1781177 ) A new module stream: perl-IO-Socket-SSL:2.066 A new perl-IO-Socket-SSL:2.066 module stream is now available. This module provides the perl-IO-Socket-SSL and perl-Net-SSLeay packages and it is compatible with all Perl streams available in RHEL 8. ( BZ#1824222 ) The squid:4 module stream rebased to version 4.11 The Squid proxy server, provided by the squid:4 module stream, has been upgraded from version 4.4 to version 4.11. This release provides multiple bug and security fixes, and various enhancements, such as new configuration options. (BZ#1829467) Changes in the httpd:2.4 module stream RHEL 8.3 introduces the following notable changes to the Apache HTTP Server, available through the httpd:2.4 module stream: The mod_http2 module rebased to version 1.15.7 Configuration changes in the H2Upgrade and H2Push directives A new H2Padding configuration directive to control padding of the HTTP/2 payload frames Numerous bug fixes. ( BZ#1814236 ) Support for logging to journald from the CustomLog directive in httpd It is now possible to output access (transfer) logs to journald from the Apache HTTP Server by using a new option for the CustomLog directive. The supported syntax is as follows: where priority is any priority string up to debug as used in the LogLevel directive . For example, to log to journald using the combined log format, use: Note that when using this option, the server performance might be lower than when logging directly to flat files. ( BZ#1209162 ) 5.1.12. Compilers and development tools .NET 5 is now available on RHEL .NET 5 is available on Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 8, and OpenShift Container Platform. .NET 5 includes new language versions: C# 9 and F# 5.0. Significant performance improvements were made in the base libraries, GC and JIT. .NET 5 has single-file applications, which allow you to distribute .NET applications as a single executable, with all dependencies included. UBI8 images for .NET 5 are available from the Red Hat container registry and can be used with OpenShift. To use .NET 5, install the dotnet-sdk-5.0 package: For more information, see the .NET 5 documentation . ( BZ#1944677 ) New GCC Toolset 10 GCC Toolset 10 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. 
The GCC compiler has been updated to version 10.2.1, which provides many bug fixes and enhancements that are available in upstream GCC. The following tools and versions are provided by GCC Toolset 10:

Tool        Version
GCC         10.2.1
GDB         9.2
Valgrind    3.16.0
SystemTap   4.3
Dyninst     10.1.0
binutils    2.35
elfutils    0.180
dwz         0.12
make        4.2.1
strace      5.7
ltrace      0.7.91
annobin     9.29

To install GCC Toolset 10, run the following command as root: To run a tool from GCC Toolset 10: To run a shell session where tool versions from GCC Toolset 10 override system versions of these tools: For more information, see Using GCC Toolset . The GCC Toolset 10 components are available in two container images: rhel8/gcc-toolset-10-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool. rhel8/gcc-toolset-10-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. To pull a container image, run the following command as root: Note that only the GCC Toolset 10 container images are now supported. Container images of earlier GCC Toolset versions are deprecated. For details regarding the container images, see Using the GCC Toolset container images . (BZ#1842656) Rust Toolset rebased to version 1.45.2 Rust Toolset has been updated to version 1.45.2. Notable changes include: The subcommand cargo tree for viewing dependencies is now included in cargo . Casting from floating point values to integers now produces a clamped cast. Previously, when a truncated floating point value was out of range for the target integer type, the result was undefined behaviour of the compiler. Non-finite floating point values led to undefined behaviour as well. With this enhancement, finite values are clamped either to the minimum or the maximum range of the integer. Positive and negative infinity values are by default clamped to the maximum and minimum integer, respectively, and Not-a-Number (NaN) values to zero. Function-like procedural macros in expressions, patterns, and statements are now extended and stabilized. For detailed instructions regarding usage, see Using Rust Toolset . (BZ#1820593) LLVM Toolset rebased to version 10.0.1 LLVM Toolset has been upgraded to version 10.0.1. With this update, the clang-libs packages no longer include individual component libraries. As a result, it is no longer possible to link applications against them. To link applications against the clang libraries, use the libclang-cpp.so library. For more information, see Using LLVM Toolset . (BZ#1820587) Go Toolset rebased to version 1.14.7 Go Toolset has been upgraded to version 1.14.7. Notable changes include: The Go module system is now fully supported. SSL version 3.0 (SSLv3) is no longer supported. Notable Delve debugger enhancements include: The new command examinemem (or x ) for examining raw memory The new command display for printing values of an expression during each stop of the program The new --tty flag for supplying a Teletypewriter (TTY) for the debugged program The new coredump support for Arm64 The new ability to print goroutine labels The release of the Debug Adapter Protocol (DAP) server The improved output from dlv trace and trace REPL (read-eval-print-loop) commands For more information on Go Toolset, see Using Go Toolset . For more information on Delve, see the upstream Delve documentation . (BZ#1820596) SystemTap rebased to version 4.3 The SystemTap instrumentation tool has been updated to version 4.3, which provides multiple bug fixes and enhancements. 
Notable changes include: Userspace probes can be targeted by hexadecimal buildid from readelf -n . This alternative to a path name enables matching binaries to be probed under any name, and thus allows a single script to target a range of different versions. This feature works well in conjunction with the elfutils debuginfod server. Script functions can use probe USDcontext variables to access variables in the probed location, which allows the SystemTap scripts to use common logic to work with a variety of probes. The stapbpf program improvements, including try-catch statements, and error probes, have been made to enable proper error tolerance in scripts running on the BPF backend. For further information about notable changes, read the upstream release notes before updating. ( BZ#1804319 ) Valgrind rebased to version 3.16.0 The Valgrind executable code analysis tool has been updated to version 3.16.0, which provides a number of bug fixes and enhancements over the version: It is now possible to dynamically change the value of many command-line options while your program is running under Valgrind: through vgdb , through a gdb connected to the Valgrind gdbserver, or through program client requests. To get a list of dynamically changeable options, run the valgrind --help-dyn-options command. For the Cachegrind ( cg_annotate ) and Callgrind ( callgrind_annotate ) tools the --auto and --show-percs options now default to yes . The Memcheck tool produces fewer false positive errors on optimized code. In particular, Memcheck now better handles the case when the compiler transformed an A && B check into B && A , where B could be undefined and A was false. Memcheck also better handles integer equality checks and non-equality checks on partially defined values. The experimental Stack and Global Array Checking tool ( exp-sgcheck ) has been removed. An alternative for detecting stack and global array overruns is using the AddressSanitizer (ASAN) facility of GCC, which requires you to rebuild your code with the -fsanitize=address option. ( BZ#1804324 ) elfutils rebased to version 0.180 The elfutils package has been updated to version 0.180, which provides multiple bug fixes and enhancements. Notable changes include: Better support for debug info for code built with GCC LTO (link time optimization). The eu-readelf and libdw utilities now can read and handle .gnu.debuglto_ sections, and correctly resolve file names for functions that are defined across CUs (compile units). The eu-nm utility now explicitly identifies weak objects as V and common symbols as C . The debuginfod server can now index .deb archives and has a generic extension to add other package archive formats using the -Z EXT[=CMD] option. For example -Z '.tar.zst=zstdcat' indicates that archives ending with the .tar.zst extension should be unpacked using the zstdcat utility. The debuginfo-client tool has several new helper functions, such as debuginfod_set_user_data , debuginfod_get_user_data , debuginfod_get_url and debuginfod_add_http_header . It also supports file:// URLs now. ( BZ#1804321 ) GDB now supports process record and replay on IBM z15 With this enhancement, the GNU Debugger (GDB) now supports process record and replay with most of the new instructions of the IBM z15 processor (previously known as arch13). Note that the following instructions are currently not supported: SORTL (sort lists), DFLTCC (deflate conversion call), KDSA (compute digital signature authentication). 
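For context, process record and replay is driven from within a GDB session. A minimal shell sketch follows; the binary name is hypothetical, and the command list only illustrates the generic record and reverse-execution workflow, not the z15-specific instruction support:
# Record a short stretch of execution and step backwards through it
gdb -q ./my_app \
  -ex 'break main' -ex 'run' \
  -ex 'record full' \
  -ex 'next 5' \
  -ex 'reverse-step' \
  -ex 'record stop' -ex 'quit'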
(BZ#1659535) Marvell ThunderX2 performance monitoring events have been updated in papi With this enhancement, a number of performance events specific to ThunderX2, including uncore events, have been updated. As a result, developers can better investigate system performance on Marvell ThunderX2 systems. (BZ#1726070) The glibc math library is now optimized for IBM Z With this enhancement, the libm math functions were optimized to improve performance on IBM Z machines. Notable changes include: improved rounding mode handling to avoid superfluous floating point control register sets and extracts exploitation of conversion between z196 integer and float (BZ#1780204) An additional libffi-specific temporary directory is available now Previously on hardened systems, the system-wide temporary directories may not have had permissions suitable for use with the libffi library. With this enhancement, system administrators can now set the LIBFFI_TMPDIR environment variable to point to a libffi-specific temporary directory with both write and exec mount or selinux permissions. ( BZ#1723951 ) Improved performance of strstr() and strcasestr() With this update, the performance of the strstr() and strcasestr() functions has been improved across several supported architectures. As a result, users now benefit from significantly better performance of all applications using string and memory manipulation routines. (BZ#1821531) glibc now handles loading of a truncated locale archive correctly If the archive of system locales has been previously truncated, either due to a power outage during upgrade or a disk failure, a process could terminate unexpectedly when loading the archive. This enhancement adds additional consistency checks to the loading of the locale archive. As a result, processes are now able to detect archive truncation and fall back to either non-archive installed locales or the default POSIX locale. (BZ#1784525) GDB now supports debuginfod With this enhancement, the GNU Debugger (GDB) can now download debug information packages from centralized servers on demand using the elfutils debuginfod client library. ( BZ#1838777 ) pcp rebased to version 5.1.1-3 The pcp package has been upgraded to version 5.1.1-3. Notable changes include: Updated service units and improved systemd integration and reliability for all the PCP services. Improved archive log rotation and more timely compression. Archived discovery bug fixes in the pmproxy protocol. Improved pcp-atop , pcp-dstat , pmrep , and related monitor tools along with metric labels reporting in the pmrep and export tools. Improved bpftrace , OpenMetrics , MMV, the Linux kernel agent, and other collection agents. New metric collectors for the Open vSwitch and RabbitMQ servers. New host discovery pmfind systemd service, which replaces the standalone pmmgr daemon. ( BZ#1792971 ) grafana rebased to version 6.7.3 The grafana package has been upgraded to version 6.7.3. Notable changes include: Generic OAuth role mapping support A new logs panel Multi-line text display in the table panel A new currency and energy units ( BZ#1807323 ) grafana-pcp rebased to version 2.0.2 The grafana-pcp package has been upgraded to version 2.0.2. Notable changes include: Supports the multidimensional eBPF maps to be graphed in the flamegraph. Removes an auto-completion cache in the query editor, so that the PCP metrics can appear dynamically. ( BZ#1807099 ) A new rhel8/pcp container image The rhel8/pcp container image is now available in the Red Hat Container Registry. 
The image contains the Performance Co-Pilot (PCP) toolkit, which includes preinstalled pcp-zeroconf package and the OpenMetrics PMDA. (BZ#1497296) A new rhel8/grafana container image The rhel8/grafana container image is now available in the Red Hat Container Registry. Grafana is an open source utility with metrics dashboard, and graph editor for the Graphite , Elasticsearch , OpenTSDB , Prometheus , InfluxDB , and PCP monitoring tool. ( BZ#1823834 ) 5.1.13. Identity Management IdM backup utility now checks for required replica roles The ipa-backup utility now checks if all of the services used in the IdM cluster, such as a Certificate Authority (CA), Domain Name System (DNS), and Key Recovery Agent (KRA) are installed on the replica where you are running the backup. If the replica does not have all these services installed, the ipa-backup utility exits with a warning, because backups taken on that host would not be sufficient for a full cluster restoration. For example, if your IdM deployment uses an integrated Certificate Authority (CA), a backup run on a non-CA replica will not capture CA data. Red Hat recommends verifying that the replica where you perform an ipa-backup has all of the IdM services used in the cluster installed. For more information, see Preparing for data loss with IdM backups . ( BZ#1810154 ) New password expiration notification tool Expiring Password Notification (EPN), provided by the ipa-client-epn package, is a standalone tool you can use to build a list of Identity Management (IdM) users whose passwords are expiring soon. IdM administrators can use EPN to: Display a list of affected users in JSON format, which is calculated at runtime Calculate how many emails will be sent for a given day or date range Send password expiration email notifications to users Red Hat recommends launching EPN once a day from an IdM client or replica with the included ipa-epn.timer systemd timer. (BZ#913799) JSS now provides a FIPS-compliant SSLContext Previously, Tomcat used the SSLEngine directive from the Java Cryptography Architecture (JCA) SSLContext class. The default SunJSSE implementation is not compliant with the Federal Information Processing Standard (FIPS), therefore PKI now provides a FIPS-compliant implementation via JSS. ( BZ#1821851 ) Checking the overall health of your public key infrastructure is now available With this update, the public key infrastructure (PKI) Healthcheck tool reports the health of the PKI subsystem to the Identity Management (IdM) Healthcheck tool, which was introduced in RHEL 8.1. Executing the IdM Healthcheck invokes the PKI Healthcheck, which collects and returns the health report of the PKI subsystem. The pki-healthcheck tool is available on any deployed RHEL IdM server or replica. All the checks provided by pki-healthcheck are also integrated into the ipa-healthcheck tool. ipa-healthcheck can be installed separately from the idm:DL1 module stream. Note that pki-healthcheck can also work in a standalone Red Hat Certificate System (RHCS) infrastructure. (BZ#1770322) Support for RSA PSS With this enhancement, PKI now supports the RSA PSS (Probabilistic Signature Scheme) signing algorithm. To enable this feature, set the following line in the pkispawn script file for a given subsystem: pki_use_pss_rsa_signing_algorithm=True As a result, all existing default signing algorithms for this subsystem (specified in its CS.cfg configuration file) will use the corresponding PSS version. 
For example, SHA256withRSA becomes SHA256withRSA/PSS ( BZ#1824948 ) Directory Server exports the private key and certificate to a private name space when the service starts Directory Server uses OpenLDAP libraries for outgoing connections, such as replication agreements. Because these libraries cannot access the network security services (NSS) database directly, Directory Server extracts the private key and certificates from the NSS database on instances with TLS encryption support to enable the OpenLDAP libraries to establish encrypted connections. Previously, Directory Server extracted the private key and certificates to the directory set in the nsslapd-certdir parameter in the cn=config entry (default: /etc/dirsrv/slapd-<instance_name>/ ). As a consequence, Directory Server stored the Server-Cert-Key.pem and Server-Cert.pem in this directory. With this enhancement, Directory Server extracts the private key and certificate to a private name space that systemd mounts to the /tmp/ directory. As a result, the security has been increased. ( BZ#1638875 ) Directory Server can now turn an instance to read-only mode if the disk monitoring threshold is reached This update adds the nsslapd-disk-monitoring-readonly-on-threshold parameter to the cn=config entry. If you enable this setting, Directory Server switches all databases to read-only if disk monitoring is enabled and the free disk space is lower than the value you configured in nsslapd-disk-monitoring-threshold . With nsslapd-disk-monitoring-readonly-on-threshold set to on , the databases cannot be modified until Directory Server successfully shuts down the instance. This can prevent data corruption. (BZ#1728943) samba rebased to version 4.12.3 The samba packages have been upgraded to upstream version 4.12.3, which provides a number of bug fixes and enhancements over the version: Built-in cryptography functions have been replaced with GnuTLS functions. This improves the server message block version 3 (SMB3) performance and copy speed significantly. The minimum runtime support is now Python 3.5. The write cache size parameter has been removed because the write cache concept could reduce the performance on memory-constrained systems. Support for authenticating connections using Kerberos tickets with DES encryption types has been removed. The vfs_netatalk virtual file system (VFS) module has been removed. The ldap ssl ads parameter is marked as deprecated and will be removed in a future Samba version. For information about how to alternatively encrypt LDAP traffic and further details, see the samba: removal of "ldap ssl ads" smb.conf option solution. By default, Samba on RHEL 8.3 no longer supports the deprecated RC4 cipher suite. If you run Samba as a domain member in an AD that still requires RC4 for Kerberos authentication, use the update-crypto-policies --set DEFAULT:AD-SUPPORT command to enable support for the RC4 encryption type. Samba automatically updates its tdb database files when the smbd , nmbd , or winbind service starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating. ( BZ#1817557 ) cockpit-session-recording rebased to version 4 The cockpit-session-recording module has been rebased to version 4. This version provides following notable changes over the version: Updated parent id in the metainfo file. Updated package manifest. 
Fixed rpmmacro to resolve correct path on CentOS7. Handled byte-array encoded journal data. Moved code out of deprecated React lifecycle functions. ( BZ#1826516 ) krb5 rebased to version 1.18.2 The krb5 packages have been upgraded to upstream version 1.18.2. Notable fixes and enhancements include: Single- and triple-DES encryption types have been removed. Draft 9 PKINIT has been removed as it is not needed for any of the supported versions of Active Directory. NegoEx mechanism plug-ins are now supported. Hostname canonicalization fallback is now supported ( dns_canonicalize_hostname = fallback ). (BZ#1802334) IdM now supports new Ansible management modules This update introduces several ansible-freeipa modules for automating common Identity Management (IdM) tasks using Ansible playbooks: The config module allows setting global configuration parameters within IdM. The dnsconfig module allows modifying global DNS configuration. The dnsforwardzone module allows adding and removing DNS forwarders from IdM. The dnsrecord allows the management of DNS records. In contrast to the upstream ipa_dnsrecord , it allows multiple record management in one execution, and it supports more record types. The dnszone module allows configuring zones in the DNS server. The service module allows ensuring the presence and absence of services. The vault module allows ensuring the presence and absence of vaults and of the members of vaults. Note that the ipagroup and ipahostgroup modules have been extended to include user and host group membership managers, respectively. A group membership manager is a user or a group that can add members to a group or remove members from a group. For more information, see the Variables sections of the respective /usr/share/doc/ansible-freeipa/README-* files. (JIRA:RHELPLAN-49954) IdM now supports a new Ansible system role for certificate management Identity Management (IdM) supports a new Ansible system role for automating certificate management tasks. The new role includes the following benefits: The role helps automate the issuance and renewal of certificates. The role can be configured to have the ipa certificate authority issue your certificates. In this way, you can use your existing IdM infrastructure to manage the certificate trust chain. The role allows you to specify the commands to be executed before and after a certificate is issued, for example the stopping and starting of services. (JIRA:RHELPLAN-50002) Identity Management now supports FIPS With this enhancement, you can now use encryption types that are approved by the Federal Information Processing Standard (FIPS) with the authentication mechanisms in Identity Management (IdM). Note that a cross-forest trust between IdM and Active Directory is not FIPS compliant. Customers who require FIPS but do not require an AD trust can now install IdM in FIPS mode. (JIRA:RHELPLAN-43531) OpenDNSSEC in idm:DL1 rebased to version 2.1 The OpenDNSSEC component of the idm:DL1 module stream has been upgraded to the 2.1 version series, which is the current long term upstream support version. OpenDNSSEC is an open source project driving the adoption of Domain Name System Security Extensions (DNSSEC) to further enhance Internet security. OpenDNSSEC 2.1 provides a number of bug fixes and enhancements over the version. 
For more information, read the upstream release notes: https://www.opendnssec.org/archive/releases/ (JIRA:RHELPLAN-48838) IdM now supports the deprecated RC4 cipher suite with a new system-wide cryptographic subpolicy This update introduces the new AD-SUPPORT cryptographic subpolicy that enables the Rivest Cipher 4 (RC4) cipher suite in Identity Management (IdM). As an administrator in the context of IdM-Active Directory (AD) cross-forest trusts, you can activate the new AD-SUPPORT subpolicy when AD is not configured to use Advanced Encryption Standard (AES). More specifically, Red Hat recommends enabling the new subpolicy if one of the following conditions applies: The user or service accounts in AD have RC4 encryption keys and lack AES encryption keys. The trust links between individual Active Directory domains have RC4 encryption keys and lack AES encryption keys. To enable the AD-SUPPORT subpolicy in addition to the DEFAULT cryptographic policy, enter: Alternatively, to upgrade trusts between AD domains in an AD forest so that they support strong AES encryption types, see the following Microsoft article: AD DS: Security: Kerberos "Unsupported etype" error when accessing a resource in a trusted domain . (BZ#1851139) Adjusting to new Microsoft LDAP channel binding and LDAP signing requirements With recent Microsoft updates, Active Directory (AD) flags the clients that do not use the default Windows settings for LDAP channel binding and LDAP signing. As a consequence, RHEL systems that use the System Security Services Daemon (SSSD) for direct or indirect integration with AD might trigger error Event IDs in AD upon successful Simple Authentication and Security Layer (SASL) operations that use the Generic Security Services Application Program Interface (GSSAPI). To prevent these notifications, configure client applications to use the Simple and Protected GSSAPI Negotiation Mechanism (GSS-SPNEGO) SASL mechanism instead of GSSAPI. To configure SSSD, set the ldap_sasl_mech option to GSS-SPNEGO . Additionally, if channel binding is enforced on the AD side, configure any systems that use SASL with SSL/TLS in the following way: Install the latest versions of the cyrus-sasl , openldap and krb5-libs packages that are shipped with RHEL 8.3 and later. In the /etc/openldap/ldap.conf file, specify the correct channel binding type by setting the SASL_CBINDING option to tls-endpoint . For more information, see Impact of Microsoft Security Advisory ADV190023 | LDAP Channel Binding and LDAP Signing on RHEL and AD integration . ( BZ#1873567 ) SSSD, adcli, and realmd now support the deprecated RC4 cipher suite with a new system-wide cryptographic subpolicy This update introduces the new AD-SUPPORT cryptographic subpolicy that enables the Rivest Cipher 4 (RC4) cipher suite for the following utilities: the System Security Services Daemon (SSSD) adcli realmd As an administrator, you can activate the new AD-SUPPORT subpolicy when Active Directory (AD) is not configured to use Advanced Encryption Standard (AES) in the following scenarios: SSSD is used on a RHEL system connected directly to AD. adcli is used to join an AD domain or to update host attributes, for example the host key. realmd is used to join an AD domain. Red Hat recommends enabling the new subpolicy if one of the following conditions applies: The user or service accounts in AD have RC4 encryption keys and lack AES encryption keys. The trust links between individual Active Directory domains have RC4 encryption keys and lack AES encryption keys. 
To enable the AD-SUPPORT subpolicy in addition to the DEFAULT cryptographic policy, enter the update-crypto-policies --set DEFAULT:AD-SUPPORT command. ( BZ#1866695 ) authselect has a new minimal profile The authselect utility has a new minimal profile. You can use this profile to serve only local users and groups directly from system files instead of using other authentication providers. Therefore, you can safely remove the SSSD , winbind , and fprintd packages and use this profile on systems that require a minimal installation to save disk and memory space. (BZ#1654018) SSSD now updates Samba's secrets.tdb file when rotating a password A new ad_update_samba_machine_account_password option in the sssd.conf file is now available in RHEL. You can use it to set SSSD to automatically update the Samba secrets.tdb file when rotating a machine's domain password while using Samba. However, if SELinux is in enforcing mode, SSSD fails to update the secrets.tdb file. Consequently, Samba does not have access to the new password. To work around this problem, set SELinux to permissive mode. ( BZ#1793727 ) SSSD now enforces AD GPOs by default The default setting for the SSSD option ad_gpo_access_control is now enforcing . In RHEL 8, SSSD enforces access control rules based on Active Directory Group Policy Objects (GPOs) by default. Red Hat recommends ensuring GPOs are configured correctly in Active Directory before upgrading from RHEL 7 to RHEL 8. If you do not want to enforce GPOs, change the value of the ad_gpo_access_control option in the /etc/sssd/sssd.conf file to permissive . (JIRA:RHELPLAN-51289) Directory Server now supports the pwdReset operation attribute This enhancement adds support for the pwdReset operation attribute to Directory Server. When an administrator changes the password of a user, Directory Server sets pwdReset in the user's entry to true . As a result, applications can use this attribute to identify whether a user's password has been reset by an administrator. Note that pwdReset is an operational attribute and, therefore, users cannot edit it. ( BZ#1775285 ) Directory Server now logs the work and operation time in RESULT entries With this update, Directory Server now logs two additional time values in RESULT entries in the /var/log/dirsrv/slapd-<instance_name>/access file: The wtime value indicates how long it took for an operation to move from the work queue to a worker thread. The optime value shows the time the actual operation took to be completed once a worker thread started the operation. The new values provide additional information about how the Directory Server handles load and processes operations. For further details, see the Access Log Reference section in the Red Hat Directory Server Configuration, Command, and File Reference. ( BZ#1850275 ) 5.1.14. Desktop Single-application session is now available You can now start GNOME in a single-application session, also known as kiosk mode. In this session, GNOME displays only a full-screen window of an application that you have configured. To enable the single-application session (a condensed command sketch follows this procedure): Install the gnome-session-kiosk-session package. Create and edit the $HOME/.local/bin/redhat-kiosk file of the user that will open the single-application session. In the file, enter the executable name of the application that you want to launch, for example the Text Editor application. Make the file executable. At the GNOME login screen, select the Kiosk session from the cogwheel button menu and log in as the single-application user.
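A condensed sketch of the steps above (the gedit executable is only an example application, and the launcher script contents are an assumption rather than the exact contents shown in the product documentation):

# 1. Install the kiosk session package
yum install gnome-session-kiosk-session
# 2. Create the launcher script for the kiosk user; gedit is an example application
cat > "$HOME/.local/bin/redhat-kiosk" << 'EOF'
#!/bin/sh
gedit &
EOF
# 3. Make the file executable
chmod +x "$HOME/.local/bin/redhat-kiosk"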
(BZ#1739556) tigervnc has been rebased to version 1.10.1 The tigervnc suite has been rebased to version 1.10.1. The update contains a number of fixes and improvements. Most notably: tigervnc now supports starting the virtual network computing (VNC) server only through the systemd service manager. The clipboard now supports full Unicode in the native viewer, WinVNC and Xvnc/libvnc.so. The native client will now respect the system trust store when verifying server certificates. The Java web server has been removed. x0vncserver can now be configured to only allow local connections. x0vncserver has received fixes for when only part of the display is shared. Polling is now the default in WinVNC . Compatibility with VMware's VNC server has been improved. Compatibility with some input methods on macOS has been improved. Automatic "repair" of JPEG artefacts has been improved. ( BZ#1806992 ) 5.1.15. Graphics infrastructures Support for new graphics cards The following graphics cards are now fully supported: The AMD Navi 14 family, which includes the following models: Radeon RX 5300 Radeon RX 5300 XT Radeon RX 5500 Radeon RX 5500 XT The AMD Renoir APU family, which includes the following models: Ryzen 3 4300U Ryzen 5 4500U, 4600U, and 4600H Ryzen 7 4700U, 4800U, and 4800H The AMD Dali APU family, which includes the following models: Athlon Silver 3050U Athlon Gold 3150U Ryzen 3 3250U Additionally, the following graphics drivers have been updated: The Matrox mgag200 driver (JIRA:RHELPLAN-55009) Hardware acceleration with Nvidia Volta and Turing The nouveau graphics driver now supports hardware acceleration with the Nvidia Volta and Turing GPU families. As a result, the desktop and applications that use 3D graphics now render efficiently on the GPU. Additionally, this frees the CPU for other tasks and improves the overall system responsiveness. (JIRA:RHELPLAN-57564) Reduced display tearing on XWayland The XWayland display back end now enables the XPresent extension. Using XPresent, applications can efficiently update their window content, which reduces display tearing. This feature significantly improves the user interface rendering of full-screen OpenGL applications, such as 3D editors. (JIRA:RHELPLAN-57567) Intel Tiger Lake GPUs are now supported This update adds support for the Intel Tiger Lake family of GPUs. This includes Intel UHD Graphics and Intel Xe GPUs found with the CPU models listed at https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html . You no longer have to set the i915.alpha_support=1 or i915.force_probe=* kernel option to enable Tiger Lake GPU support. This enhancement was released as part of the RHSA-2021:0558 asynchronous advisory. (BZ#1882620) 5.1.16. The web console Setting privileges from within the web console session With this update, the web console provides an option to switch between administrative access and limited access from inside a user session. You can switch between the modes by clicking the Administrative access or Limited access indicator in your web console session. (JIRA:RHELPLAN-42395) Improvements to logs searching With this update, the web console introduces a search box that supports several new ways to search the logs. The search box supports regular expression searching in log messages, specifying a service, or searching for entries with specific log fields.
( BZ#1710731 ) Overview page shows more detailed Insights reports With this update, when a machine is connected to Red Hat Insights, the Health card in the Overview page in the web console shows more detailed information about number of hits and their priority. (JIRA:RHELPLAN-42396) 5.1.17. Red Hat Enterprise Linux system roles Terminal log role added to RHEL system roles With this enhancement, a new Terminal log (TLOG) role has been added to RHEL system roles shipped with the rhel-system-roles package. Users can now use the tlog role to setup and configure session recording using Ansible. Currently, the tlog role supports the following tasks: Configure tlog to log recording data to the systemd journal Enable session recording for explicit users and groups, via SSSD ( BZ#1822158 ) RHEL Logging system role is now available for Ansible With the Logging system role, you can deploy various logging configurations consistently on local and remote hosts. You can configure a RHEL host as a server to collect logs from many client systems. ( BZ#1677739 ) rhel-system-roles-sap fully supported The rhel-system-roles-sap package, previously available as a Technology Preview, is now fully supported. It provides Red Hat Enterprise Linux (RHEL) system roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Please contact Red Hat Customer Support if you need assistance with your subscription. The following new roles in the rhel-system-roles-sap package are fully supported: sap-preconfigure sap-netweaver-preconfigure sap-hana-preconfigure For more information, see Red Hat Enterprise Linux system roles for SAP . (BZ#1660832) The metrics RHEL system role is now available for Ansible. With the metrics RHEL system role, you can configure, for local and remote hosts: performance analysis services via the pcp application visualisation of this data using a grafana server querying of this data using the redis data source without having to manually configure these services separately. ( BZ#1890499 ) rhel-system-roles-sap upgraded The rhel-system-roles-sap packages have been upgraded to upstream version 2.0.0, which provides multiple bug fixes and enhancements. Notable changes include: Improve hostname configuration and checking Improve uuidd status detection and handling Add support for the --check (-c) option Increase nofile limits from 32800 to 65536 Add the nfs-utils file to sap_preconfigure_packages * Disable firewalld . With this change we disable firewalld only when it is installed. Add minimum required versions of the setup package for RHEL 8.0 and RHEL 8.1. Improve the tmpfiles.d/sap.conf file handling Support single step execution or checking of SAP notes Add the required compat-sap-c++ packages Improve minimum package installation handling Detect if a reboot is required after applying the RHEL system roles Support setting any SElinux state. 
Default state is "disabled" No longer fail if there is more than one line with identical IP addresses No longer modify /etc/hosts if there is more than one line containing sap_ip Support for HANA on RHEL 7.7 Support for adding a repository for the IBM service and productivity tools for Power, required for SAP HANA on the ppc64le platform ( BZ#1844190 ) The storage RHEL system role now supports file system management With this enhancement, administrators can use the storage RHEL system role to: resize an ext4 file system resize an LVM file system create a swap partition, if it does not exist, or to modify the swap partition, if it already exists, on a block device using the default parameters. (BZ#1959289) 5.1.18. Virtualization Migrating a virtual machine to a host with incompatible TSC setting now fails faster Previously, migrating a virtual machine to a host with an incompatible Time Stamp Counter (TSC) setting failed late in the process. With this update, attempting such a migration generates an error before the migration process starts. (JIRA:RHELPLAN-45950) Virtualization support for 2nd generation AMD EPYC processors With this update, virtualization on RHEL 8 adds support for the 2nd generation AMD EPYC processors, also known as EPYC Rome. As a result, virtual machines hosted on RHEL 8 can now use the EPYC-Rome CPU model and utilise new features that the processors provide. (JIRA:RHELPLAN-45959) New command: virsh iothreadset This update introduces the virsh iothreadset command, which can be used to configure dynamic IOThread polling. This makes it possible to set up virtual machines with lower latencies for I/O-intensive workloads at the expense of greater CPU consumption for the IOThread. For specific options, see the virsh man page. (JIRA:RHELPLAN-45958) UMIP is now supported by KVM on 10th generation Intel Core processors With this update, the User-mode Instruction Prevention (UMIP) feature is now supported by KVM for hosts running on 10th generation Intel Core processors, also known as Ice Lake Servers. The UMIP feature issues a general protection exception if certain instructions, such as sgdt , sidt , sldt , smsw , and str , are executed when the Current Privilege Level (CPL) is greater than 0. As a result, UMIP ensures system security by preventing unauthorized applications from accessing certain system-wide settings which can be used to initiate privilege escalation attacks. (JIRA:RHELPLAN-45957) The libvirt library now supports Memory Bandwidth Allocation libvirt now supports Memory Bandwidth Allocation (MBA). With MBA, you can allocate parts of host memory bandwidth to vCPU threads by using the <memorytune> element in the <cputune> section. MBA is an extension of the existing Cache QoS Enforcement (CQE) feature found in the Intel Xeon v4 processors, also known as Broadwell server. For tasks that are associated with the CPU affinity, the mechanism used by MBA is the same as in CQE. (JIRA:RHELPLAN-45956) RHEL 6 virtual machines now support the Q35 machine type Virtual machines (VMs) hosted on RHEL 8 that use RHEL 6 as their guest OS can now use Q35, a more modern PCI Express-based machine type. This provides a variety of improvements in features and performance of virtual devices, and ensures that a wider range of modern devices are compatible with RHEL 6 VMs. (JIRA:RHELPLAN-45952)
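One way to request the Q35 machine type when creating such a guest is sketched below; the VM name, memory and disk sizes, OS variant, and ISO path are illustrative assumptions rather than values from this note:

# Create a RHEL 6 guest that uses the Q35 machine type (all names and paths are examples)
virt-install --name rhel6-q35 \
  --memory 2048 --vcpus 2 \
  --disk size=20 \
  --os-variant rhel6.10 \
  --machine q35 \
  --cdrom /var/lib/libvirt/images/rhel6.iso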
QEMU logs now include time stamps for spice-server events This update adds time stamps to`spice-server` event logs. Therefore, all logged QEMU events now have a time stamp. As a result, users can more easily troubleshoot their virtual machines using logs saved in the /var/log/libvirt/qemu/ directory. (JIRA:RHELPLAN-45945) The bochs-display device is now supported RHEL 8.3 and later introduce the Bochs display device, which is more secure than the currently used stdvga device. Note that all virtual machines (VMs) compatible with bochs-display will use it by default. This mainly includes VMs that use the UEFI interface. (JIRA:RHELPLAN-45939) Optimized MDS protection for virtual machines With this update, a RHEL 8 host can inform its virtual machines (VMs) whether they are vulnerable to Microarchitectural Data Sampling (MDS). VMs that are not vulnerable do not use measures against MDS, which improves their performance. (JIRA:RHELPLAN-45937) Creating QCOW2 disk images on RBD now supported With this update, it is possible to create QCOW2 disk images on RADOS Block Device (RBD) storage. As a result, virtual machines can use RBD servers for their storage back ends with QCOW2 images. Note, however, that the write performance of QCOW2 disk images on RBD storage is currently lower than intended. (JIRA:RHELPLAN-45936) Maximum supported VFIO devices increased to 64 With this update, you can attach up to 64 PCI devices that use VFIO to a single virtual machine on a RHEL 8 host. This is up from 32 in RHEL 8.2 and prior. (JIRA:RHELPLAN-45930) discard and write-zeroes commands are now supported in QEMU/KVM With this update, the discard and write-zeroes commands for virtio-blk are now supported in QEMU/KVM. As a result, virtual machines can use the virtio-blk device to discard unused sectors of an SSD, fill sectors with zeroes when they are emptied, or both. This can be used to increase SSD performance or to ensure that a drive is securely erased. (JIRA:RHELPLAN-45926) RHEL 8 now supports IBM POWER 9 XIVE This update introduces support for the External Interrupt Virtualization Engine (XIVE) feature of IBM POWER9 to RHEL 8. As a result, virtual machines (VMs) running on a RHEL 8 hypervisor on an IBM POWER 9 system can use XIVE, which improves the performance of I/O-intensive VMs. (JIRA:RHELPLAN-45922) Control Group v2 support for virtual machines With this update, the libvirt suite supports control groups v2. As a result, virtual machines hosted on RHEL 8 can take advantage of resource control capabilities of control group v2. (JIRA:RHELPLAN-45920) Paravirtualized IPIs are now supported for Windows virtual machines With this update, the hv_ipi flag has been added to the supported hypervisor enlightenments for Windows virtual machines (VMs). This allows inter-processor interrupts (IPIs) to be sent via a hypercall. As a result, IPIs can be performed faster on VMs running a Windows OS. (JIRA:RHELPLAN-45918) Migrating virtual machines with enabled disk cache is now possible This update makes the RHEL 8 KVM hypervisor compatible with disk cache live migration. As a result, it is now possible to live-migrate virtual machines with disk cache enabled. (JIRA:RHELPLAN-45916) macvtap interfaces can now be used by virtual machines in non-privileged sessions It is now possible for virtual machines (VMs) to use a macvtap interface previously created by a privileged process. Notably, this enables VMs started by the non-privileged user session of libvirtd to use a macvtap interface. 
To do so, first create a macvtap interface in a privileged environment and set it to be owned by the user who will be running libvirtd in a non-privileged session. You can do this using a management application such as the web console, or using command-line utilities as root, for example: Afterwards, modify the <target> sub-element of the VM's <interface> configuration to reference the newly created macvtap interface: With this configuration, if libvirtd is run as the user myuser , the VM will use the existing macvtap interface when started. (JIRA:RHELPLAN-45915) Virtual machines can now use features of 10th generation Intel Core processors The Icelake-Server and Icelake-Client CPU model names are now available for virtual machines (VMs). On hosts with 10th generation Intel Core processors, using Icelake-Server or Icelake-Client as the CPU type in the XML configuration of a VM makes new features of these CPUs exposed to the VM. (JIRA:RHELPLAN-45911) QEMU now supports LUKS encryption With this update, it is possible to create virtual disks using Linux Unified Key Setup (LUKS) encryption. You can encrypt the disks when creating the storage volume by including the <encryption> field in the virtual machine's (VM) XML configuration. You can also make the LUKS encrypted virtual disk completely transparent to the VM by including the <encryption> field in the disk's domain definition in the XML configuration file. (JIRA:RHELPLAN-45910) Improved logs for nbdkit The nbdkit service logging has been modified to be less verbose. As a result, nbdkit logs only potentially important messages, and the logs created during virt-v2v conversions are shorter and easier to parse. (JIRA:RHELPLAN-45909) Improved consistency for virtual machines SELinux security labels and permissions With this update, the libvirt service can record SELinux security labels and permissions associated with files, and restore the labels after modifying the files. As a result, for example, using libguestfs utilities to modify a virtual machine (VM) disk image owned by a specific user no longer changes the image owner to root. Note that this feature does not work on file systems that do not support extended file attributes, such as NFS. (JIRA:RHELPLAN-45908) QEMU now uses the gcrypt library for XTS ciphers With this update, the QEMU emulator has been changed to use the XTS cipher mode implementation provided by the gcrypt library. This improves the I/O performance of virtual machines whose host storage uses QEMU's native luks encryption driver. (JIRA:RHELPLAN-45904) Windows Virtio drivers can now be updated using Windows Updates With this update, a new standard SMBIOS string is initiated by default when QEMU starts. The parameters provided in the SMBIOS fields make it possible to generate IDs for the virtual hardware running on the virtual machine(VM). As a result, Windows Update can identify the virtual hardware and the RHEL hypervisor machine type, and update the Virtio drivers on VMs running Windows 10+, Windows Server 2016, and Windows Server 2019+. (JIRA:RHELPLAN-45901) New command: virsh guestinfo The virsh guestinfo command has been introduced to RHEL 8.3. This makes it possible to report the following types of information about a virtual machine (VM): Guest OS and file system information Active users The time zone used Before running virsh guestinfo , ensure that the qemu-guest-agent package is installed. 
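A minimal usage sketch for the command (the domain name testguest is an illustrative assumption):

# Query guest OS, user, and time zone details through the guest agent
virsh guestinfo testguest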
In addition, the guest_agent channel must be enabled in the VM's XML configuration, for example as follows: (JIRA:RHELPLAN-45900) VNNI for BFLOAT16 inputs are now supported by KVM With this update, Vector Neural Network Instructions (VNNI) supporting BFLOAT16 inputs, also known as AVX512_BF16 instructions, are now supported by KVM for hosts running on the 3rd Gen Intel Xeon scalable processors, also known as Cooper Lake. As a result, guest software can now use the AVX512_BF16 instructions inside virtual machines, by enabling it in the virtual CPU configuration. (JIRA:RHELPLAN-45899) New command: virsh pool-capabilities RHEL 8.3 introduces the virsh pool-capabilities command option. This command displays information that can be used for creating storage pools, as well as storage volumes within each pool, on your host. This includes: Storage pool types Storage pool source formats Target storage volume format types (JIRA:RHELPLAN-45884) Support for CPUID.1F in virtual machines with Intel Xeon Platinum 9200 series processors With this update, virtual machines hosted on RHEL 8 can be configured with a virtual CPU topology of multiple dies, using the Extended Topology Enumeration leaf feature (CPUID.1F). This feature is supported by Intel Xeon Platinum 9200 series processors, previously known as Cascade Lake. As a result, it is now possible on hosts that use Intel Xeon Platinum 9200 series processors to create a vCPU topology that mirrors the physical CPU topology of the host. (JIRA:RHELPLAN-37573, JIRA:RHELPLAN-45934) Virtual machines can now use features of 3rd Generation Intel Xeon Scalable Processors The Cooperlake CPU model name is now available for virtual machines (VMs). Using Cooperlake as the CPU type in the XML configuration of a VM makes new features from the 3rd Generation Intel Xeon Scalable Processors exposed to the VM, if the host uses this CPU. (JIRA:RHELPLAN-37570) Intel Optane persistent memory now supported by KVM With this update, virtual machines hosted on RHEL 8 can benefit from the Intel Optane persistent memory technology, previously known as Intel Crystal Ridge. Intel Optane persistent memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput. (JIRA:RHELPLAN-14068) Virtual machines can now use Intel Processor Trace With this update, virtual machines (VMs) hosted on RHEL 8 are able to use the Intel Processor Trace (PT) feature. When your host uses a CPU that supports Intel PT, you can use specialized Intel software to collect a variety of metrics about the performance of your VM's CPU. Note that this also requires enabling the intel-pt feature in the XML configuration of the VM. (JIRA:RHELPLAN-7788) DASD devices can now be assigned to virtual machines on IBM Z Direct-access storage devices (DASDs) provide a number of specific storage features. Using the vfio-ccw feature, you can assign DASDs as mediated devices to your virtual machines (VMs) on IBM Z hosts. This for example makes it possible for the VM to access a z/OS dataset, or to share the assigned DASDs with a z/OS machine. (JIRA:RHELPLAN-40234) IBM Secure Execution supported for IBM Z When using IBM Z hardware to run your RHEL 8 host, you can improve the security of your virtual machines (VMs) by configuring IBM Secure Execution for the VMs. IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing a VM's state and memory contents. 
As a result, even if the host is compromised, it cannot be used as a vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent untrusted hosts from obtaining sensitive information from the VM. (JIRA:RHELPLAN-14754) 5.1.19. RHEL in cloud environments cloud-utils-growpart rebased to 0.31 The cloud-utils-growpart package has been upgraded to version 0.31, which provides multiple bug fixes and enhancements. Notable changes include: A bug that prevented GPT disks from being grown past 2TB has been fixed. The growpart operation no longer fails when the start sector and size are the same. Resizing a partition using the sgdisk utility previously in some cases failed. This problem has now been fixed. ( BZ#1846246 ) 5.1.20. Containers skopeo container image is now available The registry.redhat.io/rhel8/skopeo container image is a containerized implementation of the skopeo package. The skopeo tool is a command-line utility that performs various operations on container images and image repositories. This container image allows you to inspect container images in a registry, to remove a container image from a registry, and to copy container images from one unauthenticated container registry to another. To pull the registry.redhat.io/rhel8/skopeo container image, you need an active Red Hat Enterprise Linux subscription. ( BZ#1627900 ) buildah container image is now available The registry.redhat.io/rhel8/buildah container image is a containerized implementation of the buildah package. The buildah tool facilitates building OCI container images. This container image allows you to build container images without the need to install the buildah package on your system. The use-case does not cover running this image in rootless mode as a non-root user. To pull the registry.redhat.io/rhel8/buildah container image, you need an active Red Hat Enterprise Linux subscription. ( BZ#1627898 ) Podman v2.0 RESTful API is now available The new REST based Podman 2.0 API replaces the old remote API based on the varlink library. The new API works in both a rootful and a rootless environment and provides a docker compatibility layer. (JIRA:RHELPLAN-37517) Installing Podman does not require container-selinux With this enhancement, the installation of the container-selinux package is now optional during the container build. As a result, Podman has fewer dependencies on other packages. ( BZ#1806044 ) 5.2. Important changes to external kernel parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 8.3. These changes could include for example added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters acpi_no_watchdog = [HW,ACPI,WDT] This parameter enables to ignore the Advanced Configuration and Power Interface (ACPI) based watchdog interface (WDAT) and let the native driver control the watchdog device instead. dfltcc = [HW,S390] This parameter configures the zlib hardware support for IBM Z architectures. 
Format: { on | off | def_only | inf_only | always } The options are: on (default) - IBM Z zlib hardware support for compression on level 1 and decompression off - No IBM Z zlib hardware support def_only - IBM Z zlib hardware support for the deflate algorithm only (compression on level 1) inf_only - IBM Z zlib hardware support for the inflate algorithm only (decompression) always - Similar to on , but ignores the selected compression level and always uses hardware support (used for debugging) irqchip.gicv3_pseudo_nmi = [ARM64] This parameter enables support for pseudo non-maskable interrupts (NMIs) in the kernel. To use this parameter, you need to build the kernel with the CONFIG_ARM64_PSEUDO_NMI configuration item. panic_on_taint = Bitmask for conditionally calling panic() in add_taint() Format: <hex>[, nousertaint ] A hexadecimal bitmask which represents a set of TAINT flags that will cause the kernel to panic when the add_taint() system call is invoked with any of the flags in this set. The optional nousertaint switch prevents userspace-forced crashes by writing to the /proc/sys/kernel/tainted file any flagset that matches the bitmask in panic_on_taint . For more information, see the upstream documentation . prot_virt = [S390] Format: <bool> This parameter enables hosting of protected virtual machines which are isolated from the hypervisor if the hardware support is present. rcutree.use_softirq = [KNL] This parameter enables elimination of Tree-RCU softirq processing. If you set this parameter to zero, it moves all RCU_SOFTIRQ processing to per-CPU rcuc kthreads. If you set rcutree.use_softirq to a non-zero value (the default), RCU_SOFTIRQ is used. Specify rcutree.use_softirq=0 to use rcuc kthreads. split_lock_detect = [X86] This parameter enables split lock detection. When enabled, and if hardware support is present, atomic instructions that access data across cache line boundaries will result in an alignment check exception. The options are: off - not enabled warn - the kernel will emit rate limited warnings about applications that trigger the Alignment Check Exception (#AC). This mode is the default on CPUs that support split lock detection. fatal - the kernel will send a Bus error (SIGBUS) signal to applications that trigger the #AC exception. If the #AC exception is hit while not executing in user mode, the kernel will issue an oops error in either the warn or fatal mode. srbds = [X86,INTEL] This parameter controls the Special Register Buffer Data Sampling (SRBDS) mitigation. Certain CPUs are vulnerable to a Microarchitectural Data Sampling (MDS)-like exploit which can leak bits from the random number generator. By default, microcode mitigates this issue. However, the microcode fix can cause the RDRAND and RDSEED instructions to become much slower. Among other effects, this will result in reduced throughput from the urandom kernel random number source device. To disable the microcode mitigation, set the following option: off - Disable mitigation and remove the performance impact on RDRAND and RDSEED svm = [PPC] Format: { on | off | y | n | 1 | 0 } This parameter controls the use of the Protected Execution Facility on pSeries systems. nopv = [X86,XEN,KVM,HYPER_V,VMWARE] This parameter disables the PV optimizations which forces the guest to run as a generic guest with no PV drivers. Currently supported are XEN HVM, KVM, HYPER_V and VMWARE guests. Updated kernel parameters hugepagesz = [HW] This parameter specifies a huge page size.
Use this parameter in conjunction with the hugepages parameter to pre-allocate a number of huge pages of the specified size. Specify the hugepagesz and hugepages parameters in pairs such as: The hugepagesz parameter can only be specified once on the command line for a specific huge page size. Valid huge page sizes are architecture dependent. hugepages = [HW] This parameter specifies the number of huge pages to pre-allocate. This parameter typically follows the valid hugepagesz or default_hugepagesz parameter. However, if hugepages is the first or the only HugeTLB command-line parameter, it implicitly specifies the number of huge pages of the default size to allocate. If the number of huge pages of the default size is implicitly specified, it can not be overwritten by the hugepagesz + hugepages parameter pair for the default size. For example, on an architecture with 2M default huge page size: Settings from the example above results in allocation of 256 2M huge pages and a warning message that the hugepages=512 parameter was ignored. If hugepages is preceded by invalid hugepagesz , hugepages will be ignored. default_hugepagesz = [HW] This parameter specifies the default huge page size. You can specify default_hugepagesz only once on the command-line. Optionally, you can follow default_hugepagesz with the hugepages parameter to pre-allocate a specific number of huge pages of the default size. Also, you can implicitly specify the number of default-sized huge pages to pre-allocate. For example, on an architecture with 2M default huge page size: Settings from the example above all results in allocation of 256 2M huge pages. Valid default huge page size is architecture dependent. efi = [EFI] Format: { "old_map", "nochunk", "noruntime", "debug", "nosoftreserve" } The options are: old_map [X86-64] - Switch to the old ioremap-based EFI runtime services mapping. 32-bit still uses this one by default nochunk - Disable reading files in "chunks" in the EFI boot stub, as chunking can cause problems with some firmware implementations noruntime - Disable EFI runtime services support debug - Enable miscellaneous debug output nosoftreserve - The EFI_MEMORY_SP (Specific Purpose) attribute sometimes causes the kernel to reserve the memory range for a memory mapping driver to claim. Specify efi=nosoftreserve to disable this reservation and treat the memory by its base type (for example EFI_CONVENTIONAL_MEMORY / "System RAM"). intel_iommu = [DMAR] Intel IOMMU driver Direct Memory Access Remapping (DMAR). The added options are: nobounce (Default off) - Disable bounce buffer for untrusted devices such as the Thunderbolt devices. This will treat the untrusted devices as the trusted ones. Hence this setting might expose security risks of direct memory access (DMA) attacks. mem = nn[KMG] [KNL,BOOT] This parameter forces the usage of a specific amount of memory. The amount of memory to be used in cases as follows: For test. When the kernel is not able to see the whole system memory. Memory that lies after the mem boundary is excluded from the hypervisor, then assigned to KVM guests. [X86] Work as limiting max address. Use together with the memmap parameter to avoid physical address space collisions. Without memmap , Peripheral Component Interconnect (PCI) devices could be placed at addresses belonging to unused RAM. Note that this setting only takes effect during the boot time since in the case 3 above, the memory may need to be hot added after the boot if the system memory of hypervisor is not sufficient. 
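Illustrative kernel command-line sketches for the hugepagesz , hugepages , and default_hugepagesz parameters described earlier in this section; the exact values are reconstructed from the allocation results mentioned above and should be treated as assumptions:

# Pre-allocate 512 huge pages of 2M size
hugepagesz=2M hugepages=512
# Implicit default-size allocation first; the trailing hugepages=512 for the 2M default size is ignored with a warning
hugepages=256 hugepagesz=2M hugepages=512
# Both of the following allocate 256 huge pages of the 2M default size
default_hugepagesz=2M hugepages=256
hugepages=256 default_hugepagesz=2M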
pci = [PCI] Various Peripheral Component Interconnect (PCI) subsystem options. Some options herein operate on a specific device or a set of devices ( <pci_dev> ). These are specified in one of the following formats: Note that the first format specifies a PCI bus/device/function address which may change if new hardware is inserted, if motherboard firmware changes, or due to changes caused by other kernel parameters. If the domain is left unspecified, it is taken to be zero. Optionally, a path to a device through multiple device/function addresses can be specified after the base address (this is more robust against renumbering issues). The second format selects devices using IDs from the configuration space which may match multiple devices in the system. The options are: hpmmiosize - The fixed amount of bus space which is reserved for hotplug bridge's Memory-mapped I/O (MMIO) window. The default size is 2 megabytes. hpmmioprefsize - The fixed amount of bus space which is reserved for hotplug bridge's MMIO_PREF window. The default size is 2 megabytes. pcie_ports = [PCIE] Peripheral Component Interconnect Express (PCIe) port services handling. The options are: native - Use native PCIe services (PME, AER, DPC, PCIe hotplug) even if the platform does not give the OS permission to use them. This setting may cause conflicts if the platform also tries to use these services. dpc-native - Use native PCIe service for DPC only. This setting may cause conflicts if firmware uses AER or DPC. compat - Disable native PCIe services (PME, AER, DPC, PCIe hotplug). rcu_nocbs = [KNL] The argument is a CPU list. The string "all" can be used to specify every CPU on the system. usbcore.authorized_default = [USB] The default USB device authorization. The options are: -1 (Default) - Authorized except for wireless USB 0 - Not authorized 1 - Authorized 2 - Authorized if the device is connected to the internal port usbcore.old_scheme_first = [USB] This parameter enables to start with the old device initialization scheme. This setting applies only to low and full-speed devices (default 0 = off). usbcore.quirks = [USB] A list of quirk entries to augment the built-in USB core quirk list. The list entries are separated by commas. Each entry has the form VendorID:ProductID:Flags , for example quirks=0781:5580:bk,0a5c:5834:gij . The IDs are 4-digit hex numbers and Flags is a set of letters. Each letter will change the built-in quirk; setting it if it is clear and clearing it if it is set. The added flags: o - USB_QUIRK_HUB_SLOW_RESET , hub needs extra delay after resetting its port New /proc/sys/fs parameters protected_fifos This parameter is based on the restrictions in the Openwall software and provides protection by allowing to avoid unintentional writes to an attacker-controlled FIFO where a program intended to create a regular file. The options are: 0 - Writing to FIFOs is unrestricted. 1 - Does not allow the O_CREAT flag open on FIFOs that we do not own in world writable sticky directories unless they are owned by the owner of the directory. 2 - Applies to group writable sticky directories. protected_regular This parameter is similar to the protected_fifos parameter, however it avoids writes to an attacker-controlled regular file where a program intended to create one. The options are: 0 - Writing to regular files is unrestricted. 1 - Does not allow the O_CREAT flag open on regular files that we do not own in world writable sticky directories unless they are owned by the owner of the directory. 
2 - Applies to group writable sticky directories. 5.3. Device Drivers 5.3.1. New drivers Network drivers CAN driver for Kvaser CAN/USB devices (kvaser_usb.ko.xz) Driver for Theobroma Systems UCAN devices (ucan.ko.xz) Pensando Ethernet NIC Driver (ionic.ko.xz) Graphics drivers and miscellaneous drivers Generic Remote Processor Framework (remoteproc.ko.xz) Package Level C-state Idle Injection for Intel(R) CPUs (intel_powerclamp.ko.xz) X86 PKG TEMP Thermal Driver (x86_pkg_temp_thermal.ko.xz) INT3402 Thermal driver (int3402_thermal.ko.xz) ACPI INT3403 thermal driver (int3403_thermal.ko.xz) Intel(R) acpi thermal rel misc dev driver (acpi_thermal_rel.ko.xz) INT3400 Thermal driver (int3400_thermal.ko.xz) Intel(R) INT340x common thermal zone handler (int340x_thermal_zone.ko.xz) Processor Thermal Reporting Device Driver (processor_thermal_device.ko.xz) Intel(R) PCH Thermal driver (intel_pch_thermal.ko.xz) DRM gem ttm helpers (drm_ttm_helper.ko.xz) Device node registration for cec drivers (cec.ko.xz) Fairchild FUSB302 Type-C Chip Driver (fusb302.ko.xz) VHOST IOTLB (vhost_iotlb.ko.xz) vDPA-based vhost backend for virtio (vhost_vdpa.ko.xz) VMware virtual PTP clock driver (ptp_vmw.ko.xz) Intel(R) LPSS PCI driver (intel-lpss-pci.ko.xz) Intel(R) LPSS core driver (intel-lpss.ko.xz) Intel(R) LPSS ACPI driver (intel-lpss-acpi.ko.xz) Mellanox watchdog driver (mlx_wdt.ko.xz) Mellanox FAN driver (mlxreg-fan.ko.xz) Mellanox regmap I/O access driver (mlxreg-io.ko.xz) Intel(R) speed select interface pci mailbox driver (isst_if_mbox_pci.ko.xz) Intel(R) speed select interface mailbox driver (isst_if_mbox_msr.ko.xz) Intel(R) speed select interface mmio driver (isst_if_mmio.ko.xz) Mellanox LED regmap driver (leds-mlxreg.ko.xz) vDPA Device Simulator (vdpa_sim.ko.xz) Intel(R) Tiger Lake PCH pinctrl/GPIO driver (pinctrl-tigerlake.ko.xz) PXA2xx SSP SPI Controller (spi-pxa2xx-platform.ko.xz) CE4100/LPSS PCI-SPI glue code for PXA's driver (spi-pxa2xx-pci.ko.xz) Hyper-V PCI Interface (pci-hyperv-intf.ko.xz) vDPA bus driver for virtio devices (virtio_vdpa.ko.xz) 5.3.2. Updated drivers Network driver updates VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.5.0.0-k. Realtek RTL8152/RTL8153 Based USB Ethernet Adapters (r8152.ko.xz) has been updated to version 1.09.10. Broadcom BCM573xx network driver (bnxt_en.ko.xz) has been updated to version 1.10.1. The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 4.18.0-240.el8.x86_64. Intel(R) Ethernet Switch Host Interface Driver (fm10k.ko.xz) has been updated to version 0.27.1-k. Intel(R) Ethernet Connection E800 Series Linux Driver (ice.ko.xz) has been updated to version 0.8.2-k. Storage driver updates Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:12.8.0.1. QLogic FCoE Driver (bnx2fc.ko.xz) has been updated to version 2.12.13. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 34.100.00.00. Driver for HP Smart Array Controller version (hpsa.ko.xz) has been updated to version 3.4.20-170-RH5. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.25.08.3-k. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.714.04.00-rh1. Graphics and miscellaneous driver updates Standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.17.0.0. Crypto Co-processor for Chelsio Terminator cards. (chcr.ko.xz) has been updated to version 1.0.0.0-ko. 5.4. 
Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.3 that have a significant impact on users. 5.4.1. Installer and image creation RHEL 8 initial setup now works properly via SSH Previously, the RHEL 8 initial setup interface did not display when logged in to the system using SSH. As a consequence, it was impossible to perform the initial setup on a RHEL 8 machine managed via SSH. This problem has been fixed, and RHEL 8 initial setup now works correctly when performed via SSH. ( BZ#1676439 ) Installation failed when using the reboot --kexec command Previously, the RHEL 8 installation failed when a Kickstart file that contained the reboot --kexec command was used. With this update, the installation with reboot --kexec now works as expected. ( BZ#1672405 ) America/New York time zone can now be set correctly Previously, the interactive Anaconda installation process did not allow users to set the America/New York time zone when using a kickstart file. With this update, users can now set America/New York as the preferred time zone in the interactive installer if a time zone is not specified in the kickstart file. (BZ#1665428) SELinux contexts are now set correctly Previously, when SELinux was in enforcing mode, incorrect SELinux contexts on some folders and files resulted in unexpected AVC denials when attempting to access these files after installation. With this update, Anaconda sets the correct SELinux contexts. As a result, you can now access the folders and files without manually relabeling the filesystem. ( BZ#1775975 ) Automatic partitioning now creates a valid /boot partition Previously, when installing RHEL on a system using automatic partitioning or using a kickstart file with preconfigured partitions, the installer created a partitioning scheme that could contain an invalid /boot partition. Consequently, the automatic installation process ended prematurely because the verification of the partitioning scheme failed. With this update, Anaconda creates a partitioning scheme that contains a valid /boot partition. As a result, the automatic installation completes as expected. (BZ#1630299) A GUI installation using the Binary DVD ISO image now completes successfully without CDN registration Previously, when performing a GUI installation using the Binary DVD ISO image file, a race condition in the installer prevented the installation from proceeding until you registered the system using the Connect to Red Hat feature. With this update, you can now proceed with the installation without registering the system using the Connect to Red Hat feature. (BZ#1823578) iSCSI or FCoE devices created in Kickstart and used in ignoredisk --only-use command no longer stop the installation process Previously, when the iSCSI or FCoE devices created in Kickstart were used in the ignoredisk --only-use command, the installation program failed with an error similar to Disk "disk/by-id/scsi-360a9800042566643352b476d674a774a" given in ignoredisk command does not exist . This stopped the installation process. With this update, the problem has been fixed. The installation program continues working. (BZ#1644662) System registration using CDN failed with the error message Name or service not known When you attempted to register a system using the Content Delivery Network (CDN), the registration process failed with the error message Name or service not known . This issue occurred because the empty Custom server URL and Custom Base URL values overwrote the default values for system registration. 
With this update, the empty values now do not overwrite the default values, and the system registration completes successfully. ( BZ#1862116 ) 5.4.2. Software management dnf-automatic now updates only packages with correct GPG signatures Previously, the dnf-automatic configuration file did not check GPG signatures of downloaded packages before performing an update. As a consequence, unsigned updates or updates signed by key which was not imported could be installed by dnf-automatic even though repository configuration requires GPG signature check ( gpgcheck=1 ). With this update, the problem has been fixed, and dnf-automatic checks GPG signatures of downloaded packages before performing the update. As a result, only updates with correct GPG signatures are installed from repositories that require GPG signature check. ( BZ#1793298 ) Trailing comma no longer causes entries removal in an append type option Previously, adding a trailing comma (an empty entry at the end of the list) to an append type option (for example, exclude , excludepkgs , includepkgs ) caused all entries in the option to be removed. Also, adding two commas (an empty entry) caused that only entries after the commas were used. With this update, empty entries other than leading commas (an empty entry at the beginning of the list) are ignored. As a result, only the leading comma now removes existing entries from the append type option, and the user can use it to overwrite these entries. ( BZ#1788154 ) 5.4.3. Shells and command-line tools The ReaR disk layout no longer includes entries for Rancher 2 Longhorn iSCSI devices and file systems This update removes entries for Rancher 2 Longhorn iSCSI devices and file systems from the disk layout created by ReaR . ( BZ#1843809 ) Rescue image creation with a file larger than 4 GB is now enabled on IBM POWER, little endian Previously, the ReaR utility could not create rescue images containing files larger than 4GB on IBM POWER, little endian architecture. With this update, the problem has been fixed, and it is now possible to create a rescue image with a file larger than 4 GB on IBM POWER, little endian. ( BZ#1729502 ) 5.4.4. Security SELinux no longer prevents systemd-journal-gatewayd to call newfstatat() on /dev/shm/ files used by corosync Previously, SELinux policy did not contain a rule that allows the systemd-journal-gatewayd daemon to access files created by the corosync service. As a consequence, SELinux denied systemd-journal-gatewayd to call the newfstatat() function on shared memory files created by corosync . With this update, SELinux no longer prevents systemd-journal-gatewayd to call newfstatat() on shared memory files created by corosync . (BZ#1746398) Libreswan now works with seccomp=enabled on all configurations Prior to this update, the set of allowed syscalls in the Libreswan SECCOMP support implementation did not match new usage of RHEL libraries. Consequently, when SECCOMP was enabled in the ipsec.conf file, the syscall filtering rejected even syscalls required for the proper functioning of the pluto daemon; the daemon was killed, and the ipsec service was restarted. With this update, all newly required syscalls have been allowed, and Libreswan now works with the seccomp=enabled option correctly. ( BZ#1544463 ) SELinux no longer prevents auditd to halt or power off the system Previously, the SELinux policy did not contain a rule that allows the Audit daemon to start a power_unit_file_t systemd unit. 
Consequently, auditd could not halt or power off the system even when configured to do so in cases such as no space left on a logging disk partition. This update of the selinux-policy packages adds the missing rule, and auditd can now properly halt and power off the system only with SELinux in enforcing mode. ( BZ#1826788 ) IPTABLES_SAVE_ON_STOP now works correctly Previously, the IPTABLES_SAVE_ON_STOP feature of the iptables service did not work because files with saved IP tables content received incorrect SELinux context. This prevented the iptables script from changing permissions, and the script subsequently failed to save the changes. This update defines a proper context for the iptables.save and ip6tables.save files, and creates a filename transition rule. As a consequence, the IPTABLES_SAVE_ON_STOP feature of the iptables service works correctly. ( BZ#1776873 ) NSCD databases can now use different modes Domains in the nsswitch_domain attribute are allowed access to Name Service Cache Daemon (NSCD) services. Each NSCD database is configured in the nscd.conf file, and the shared property determines whether the database uses Shared memory or Socket mode. Previously, all NSCD databases had to use the same access mode, depending on the nscd_use_shm boolean value. Now, using Unix stream socket is always allowed, and therefore different NSCD databases can use different modes. ( BZ#1772852 ) The oscap-ssh utility now works correctly when scanning a remote system with --sudo When performing a Security Content Automation Protocol (SCAP) scan of a remote system using the oscap-ssh tool with the --sudo option, the oscap tool on the remote system saves scan result files and report files into a temporary directory as the root user. Previously, if the umask settings on the remote machine were changed, oscap-ssh might have been prevented access to these files. This update fixes the issue, and as a result, oscap saves the files as the target user, and oscap-ssh accesses the files normally. ( BZ#1803116 ) OpenSCAP now handles remote file systems correctly Previously, OpenSCAP did not reliably detect remote file systems if their mount specification did not start with two slashes. As a consequence, OpenSCAP handled some network-based file systems as local. With this update, OpenSCAP identifies file systems using the file-system type instead of the mount specification. As a result, OpenSCAP now handles remote file systems correctly. ( BZ#1870087 ) OpenSCAP no longer removes blank lines from YAML multi-line strings Previously, OpenSCAP removed blank lines from YAML multi-line strings within generated Ansible remediations from a datastream. This affected Ansible remediations and caused the openscap utility to fail the corresponding Open Vulnerability and Assessment Language (OVAL) checks, producing false positive results. The issue is now fixed and as a result, openscap no longer removes blank lines from YAML multi-line strings. ( BZ#1795563 ) OpenSCAP can now scan systems with large numbers of files without running out of memory Previously, when scanning systems with low RAM and large numbers of files, the OpenSCAP scanner sometimes caused the system to run out of memory. With this update, OpenSCAP scanner memory management has been improved. As a result, the scanner no longer runs out of memory on systems with low RAM when scanning large numbers of files, for example package groups Server with GUI and Workstation . 
( BZ#1824152 ) config.enabled now controls statements correctly Previously, the rsyslog service incorrectly evaluated the config.enabled directive during the configuration processing of a statement. As a consequence, "parameter not known" errors were displayed for each statement except for the include() one. With this update, the configuration is processed for all statements equally. As a result, config.enabled now correctly disables or enables statements without displaying any error. (BZ#1659383) fapolicyd no longer prevents RHEL updates When an update replaces the binary of a running application, the kernel modifies the application binary path in memory by appending the " (deleted)" suffix. Previously, the fapolicyd file access policy daemon treated such applications as untrusted, and prevented them from opening and executing any other files. As a consequence, the system was sometimes unable to boot after applying updates. With the release of the RHBA-2020:5242 advisory, fapolicyd ignores the suffix in the binary path so the binary can match the trust database. As a result, fapolicyd enforces the rules correctly and the update process can finish. ( BZ#1897090 ) The e8 profile can now be used to remediate RHEL 8 systems with Server with GUI Using the OpenSCAP Anaconda Add-on to harden the system on the Server With GUI package group with profiles that select rules from the Verify Integrity with RPM group no longer requires an extreme amount of RAM on the system. The cause of this problem was the OpenSCAP scanner. For more details, see Scanning large numbers of files with OpenSCAP causes systems to run out of memory . As a result, hardening the system using the RHEL 8 Essential Eight (e8) profile now also works with Server With GUI . (BZ#1816199) 5.4.5. Networking Automatic loading of iptables extension modules by the nft_compat module no longer hangs Previously, when the nft_compat module loaded an extension module while an operation on network name spaces ( netns ) happened in parallel, a lock collision could occur if that extension registered a pernet subsystem during initialization. As a consequence, the kernel-called modprobe command hung. This could also be caused by other services, such as libvirtd , that also execute iptables commands. This problem has been fixed. As a result, loading iptables extension modules by the nft_compat module no longer hangs. (BZ#1757933) The firewalld service now removes ipsets when the service stops Previously, stopping the firewalld service did not remove ipsets . This update fixes the problem. As a result, ipsets are no longer left in the system after firewalld stops. ( BZ#1790948 ) firewalld no longer retains ipset entries after shutdown Previously, shutting down firewalld did not remove ipset entries. Consequently, ipset entries remained active in the kernel even after stopping the firewalld service. With this fix, shutting down firewalld removes ipset entries as expected. ( BZ#1682913 ) firewalld now restores ipset entries after reloading Previously, firewalld did not retain runtime ipset entries after reloading. Consequently, users had to manually add the missing entries again. With this update, firewalld has been modified to restore ipset entries after reloading. ( BZ#1809225 ) nftables and firewalld services are now mutually exclusive Previously, it was possible to enable the nftables and firewalld services at the same time. As a consequence, nftables was overriding firewalld rulesets.
With this update, nftables and firewalld services are now mutually exclusive so that these cannot be enabled at the same time. ( BZ#1817205 ) 5.4.6. Kernel The huge_page_setup_helper.py script now works correctly A patch that updated the huge_page_setup_helper.py script for Python 3 was accidentally removed. Consequently, after executing huge_page_setup_helper.py , the following error message appeared: With this update, the problem has been fixed by updating the libhugetlbfs.spec file. As a result, huge_page_setup_helper.py does not show any error in the described scenario. (BZ#1823398) Systems with a large amount of persistent memory boot more quickly and without timeouts Systems with a large amount of persistent memory took a long time to boot because the original source code allowed for just one initialization thread per node. For example, for a 4-node system there were 4 memory initialization threads. Consequently, if there were persistent memory file systems listed in the /etc/fstab file, the system could time out while waiting for devices to become available. With this update, the problem has been fixed because the source code now allows for multiple memory initialization threads within a single node. As a result, the systems boot more quickly and no timeouts appear in the described scenario. (BZ#1666538) The bcc scripts now successfully compile a BPF module During the script code compilation to create a Berkeley Packet Filter (BPF) module, the bcc toolkit used kernel headers for data type definition. Some kernel headers needed the KBUILD_MODNAME macro to be defined. Consequently, those bcc scripts that did not add KBUILD_MODNAME , were likely to fail to compile a BPF module across various CPU architectures. The following bcc scripts were affected: bindsnoop sofdsnoop solisten tcpaccept tcpconnect tcpconnlat tcpdrop tcpretrans tcpsubnet tcptop tcptracer With this update, the problem has been fixed by adding KBUILD_MODNAME to the default cflags parameter for bcc . As a result, this problem no longer appears in the described scenario. Also, customer scripts do not need to define KBUILD_MODNAME themselves either. (BZ#1837906) bcc-tools and bpftrace work properly on IBM Z Previously, a feature backport introduced the ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE kernel option. However, the bcc-tools package and bpftrace tracing language package for IBM Z architectures did not have proper support for this option. Consequently, the bpf() system call failed with the Invalid argument exception and bpftrace failed with an error stating Error loading program when trying to load the BPF program. With this update, the ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE option is now removed. As a result, the problem no longer appears in the described scenario. (BZ#1847837, BZ#1853964) Boot process no longer fails due to lack of entropy Previously, the boot process failed due to lack of entropy. A better mechanism is now used to allow the kernel to gather entropy early in the boot process, which does not depend on any hardware specific interrupts. This update fixes the problem by ensuring availability of sufficient entropy to secure random generation in early boot. As a result, the fix prevents kickstart timeout or slow boots and the boot process works as expected. ( BZ#1778762 ) Repeated reboots using kexec now work as expected Previously, during the kernel reboot on the Amazon EC2 Nitro platform, the remove module ( rmmod ) was not called during the shutdown() call of the kernel execution path. 
Consequently, repeated kernel reboots using the kexec system call led to a failure. With this update, the issue has been fixed by adding the PCI shutdown() handler that allows safe kernel execution. As a result, repeated reboots using kexec on Amazon EC2 Nitro platforms no longer fail. (BZ#1758323) Repeated reboots using vPMEM as the dump target now work as expected Previously, using Virtual Persistent Memory (vPMEM) namespaces as the dump target for kdump or fadump caused the papr_scm module to unmap and remap the memory backed by vPMEM and re-add the memory to its linear map. Consequently, this behavior triggered Hypervisor Calls (HCalls) to the POWER Hypervisor. As a result, the capture kernel boot slowed down considerably and saving the dump file took a long time. This update fixes the problem, and the boot process now works as expected in the described scenario. (BZ#1792125) Attempting to add an ICE driver NIC port to a mode 5 bonding master interface no longer fails Previously, attempting to add an ICE driver NIC port to a mode 5 ( balance-tlb ) bonding master interface led to a failure with the error Master 'bond0', Slave 'ens1f0': Error: Enslave failed . Consequently, you experienced an intermittent failure to add the NIC port to the bonding master interface. This update fixes the issue, and adding the interface no longer fails. (BZ#1791664) The cxgb4 driver no longer causes a crash in the kdump kernel Previously, the kdump kernel would crash while trying to save information in the vmcore file. Consequently, the cxgb4 driver prevented the kdump kernel from saving a core for later analysis. To work around this problem, add the novmcoredd parameter to the kdump kernel command line to allow saving core files. With the release of the RHSA-2020:1769 advisory, the kdump kernel handles this situation properly and no longer crashes. (BZ#1708456) 5.4.7. High availability and clusters When a GFS2 file system is used with the Filesystem agent, the fast_stop option now defaults to no Previously, when a GFS2 file system was used with the Filesystem agent, the fast_stop option defaulted to yes . This value could result in unnecessary fence events due to the length of time it can take a GFS2 file system to unmount. With this update, this option defaults to no . For all other file systems, it continues to default to yes . (BZ#1814896) fence_compute and fence_evacuate agents now interpret the insecure option in a more standard way Previously, the fence_compute and fence_evacuate agents worked as if --insecure was specified by default. With this update, customers who do not use valid certificates for their compute or evacuate services must set insecure=true and use the --insecure option when running manually from the CLI. This is consistent with the behavior of all other agents. ( BZ#1830776 ) 5.4.8. Dynamic programming languages, web and database servers Optimized CPU consumption by libdb An update to the libdb database caused excessive CPU consumption in the trickle thread. With this update, the CPU usage has been optimized. ( BZ#1670768 ) The did_you_mean Ruby gem no longer contains a file with a non-commercial license Previously, the did_you_mean gem available in the ruby:2.5 module stream contained a file with a non-commercial license. This update removes the affected file.
( BZ#1846113 ) nginx can now load server certificates from hardware security tokens through the PKCS#11 URI The ssl_certificate directive of the nginx web server supports loading TLS server certificates from hardware security tokens directly from PKCS#11 modules. Previously, it was impossible to load server certificates from hardware security tokens through the PKCS#11 URI. ( BZ#1668717 ) 5.4.9. Compilers and development tools The glibc dynamic loader no longer fails while loading a shared library that uses DT_FILTER and has a constructor Prior to this update, a defect in the dynamic loader implementation of shared objects as filters caused the dynamic loader to fail while loading a shared library that uses a filter and has a constructor. With this release, the dynamic loader implementation of filters ( DT_FILTER ) has been fixed to correctly handle such shared libraries. As a result, the dynamic loader now works as expected in the mentioned scenario. ( BZ#1812756 ) glibc can now remove pseudo-mounts from the getmntent() list The kernel includes automount pseudo-entries in the tables exposed to userspace. Consequently, programs that use the getmntent() API see both regular mounts and these pseudo-mounts in the list. The pseudo-mounts do not correspond to real mounts, nor include valid information. With this update, if the mount entry has the ignore mount option present in the automount(8) configuration the glibc library now removes these pseudo-mounts from the getmntent() list. Programs that expect the behavior have to use a different API. (BZ#1743445) The movv1qi pattern no longer causes miscompilation in the auto-vectorized code on IBM Z Prior to this update, wrong load instructions were emitted for the movv1qi pattern. As a consequence, when auto-vectorization was in effect, a miscompilation could occur on IBM Z systems. This update fixes the movv1qi pattern, and as a result, code compiles and runs correctly now. (BZ#1784758) PAPI_event_name_to_code() now works correctly in multiple threads Prior to this update, the PAPI internal code did not handle thread coordination properly. As a consequence, when multiple threads used the PAPI_event_name_to_code() operation, a race condition occurred and the operation failed. This update enhances the handling of multiple threads in the PAPI internal code. As a result, multithreaded code using the PAPI_event_name_to_code() operation now works correctly. (BZ#1807346) Improved performance for the glibc math functions on IBM Power Systems Previously, the glibc math functions performed unnecessary floating point status updates and system calls on IBM Power Systems, which negatively affected the performance. This update removes the unnecessary floating point status update, and improves the implementations of: ceil() , ceilf() , fegetmode() , fesetmode() , fesetenv() , fegetexcept() , feenableexcept() , fedisablexcept() , fegetround() and fesetround() . As a result, the performance of the math library is improved on IBM Power Systems. (BZ#1783303) Memory protection keys are now supported on IBM Power On IBM Power Systems, the memory protection key interfaces pkey_set and pkey_get were previously stub functions, and consequently always failed. This update implements the interfaces, and as a result, the GNU C Library ( glibc ) now supports memory protection keys on IBM Power Systems. 
Note that memory protection keys currently require the hash-based memory management unit (MMU), therefore you might have to boot certain systems with the disable_radix kernel parameter. (BZ#1642150) papi-testsuite and papi-devel now install the required papi-libs package Previously, the papi-testsuite and papi-devel RPM packages did not declare a dependency on the matching papi-libs package. Consequently, the tests failed to run, and developers did not have the required version of the papi shared library available for their applications. With this update, when the user installs either the papi-testsuite or papi-devel packages, the papi-libs package is also installed. As a result, the papi-testsuite now has the correct library allowing the tests to run, and developers using papi-devel have their executables linked with the appropriate version of the papi shared library. ( BZ#1664056 ) Installing the lldb packages for multiple architectures no longer leads to file conflicts Previously, the lldb packages installed architecture-dependent files in architecture-independent locations. As a consequence, installing both 32-bit and 64-bit versions of the packages led to file conflicts. This update packages the files in correct architecture-dependent locations. As a result, the installation of lldb in the described scenario completes successfully. (BZ#1841073) getaddrinfo now correctly handles a memory allocation failure Previously, after a memory allocation failure, the getaddrinfo function of the GNU C Library glibc did not release the internal resolver context. As a consequence, getaddrinfo was not able to reload the /etc/resolv.conf file for the rest of the lifetime of the calling thread, resulting in a possible memory leak. This update modifies the error handling path with an additional release operation for the resolver context. As a result, getaddrinfo reloads /etc/resolv.conf with new configuration values even after an intermittent memory allocation failure. ( BZ#1810146 ) glibc avoids certain failures caused by IFUNC resolver ordering Previously, the implementation of the librt and libpthread libraries of the GNU C Library glibc contained the indirect function (IFUNC) resolvers for the following functions: clock_gettime , clock_getcpuclockid , clock_nanosleep , clock_settime , vfork . In some cases, the IFUNC resolvers could execute before the librt and libpthread libraries were relocated. Consequently, applications would fail in the glibc dynamic loader during early program startup. With this release, the implementations of these functions have been moved into the libc component of glibc , which prevents the described problem from occurring. ( BZ#1748197 ) Assertion failures no longer occur during pthread_create Previously, the glibc dynamic loader did not roll back changes to the internal Thread Local Storage (TLS) module ID counter. As a consequence, an assertion failure in the pthread_create function could occur after the dlopen function had failed in certain ways. With this fix, the glibc dynamic loader updates the TLS module ID counter at a later point in time, after certain failures can no longer happen. As a result, the assertion failures no longer occur. ( BZ#1774115 ) glibc now installs correct dependencies for 32-bit applications using nss_db Previously, the nss_db.x86_64 package did not declare dependencies on the nss_db.i686 package. Therefore automated installation did not install nss_db.i686 on the system, despite having a 32-bit environment glibc.i686 installed. 
As a consequence, 32-bit applications using nss_db failed to perform accurate user database lookups, while 64-bit applications in the same setup worked correctly. With this update, the glibc packages now have weak dependencies that trigger the installation of the nss_db.i686 package when both glibc.i686 and nss_db are installed on the system. As a result, 32-bit applications using nss_db now work correctly, even if the system administrator has not explicitly installed the nss_db.i686 package. ( BZ#1807824 ) glibc locale information updated with Odia language The name of Indian state previously known as Orissa has changed to Odisha, and the name of its official language has changed from Oriya to Odia. With this update, the glibc locale information reflects the new name of the language. ( BZ#1757354 ) LLVM sub packages now install arch-dependent files in arch-dependent locations Previously, LLVM sub packages installed arch-dependent files in arch-independent locations. This resulted in conflicts when installing 32 and 64 bit versions of LLVM. With this update, package files are now correctly installed in arch-dependent locations, avoiding version conflicts. (BZ#1820319) Password and group lookups no longer fail in glibc Previously, the nss_compat module of the glibc library overwrote the errno status with incorrect error codes during processing of password and group entries. Consequently, applications did not resize buffers as expected, causing password and group lookups to fail. This update fixes the problem, and the lookups now complete as expected. ( BZ#1836867 ) 5.4.10. Identity Management SSSD no longer downloads every rule with a wildcard character by default Previously, the ldap_sudo_include_regexp option was incorrectly set to true by default. As a consequence, when SSSD started running or after updating SSSD rules, SSSD downloaded every rule that contained a wildcard character ( * ) in the sudoHost attribute. This update fixes the bug, and the ldap_sudo_include_regexp option is now properly set to false by default. As a result, the described problem no longer occurs. ( BZ#1827615 ) krb5 now only requests permitted encryption types Previously, permitted encryption types specified in the permitted_enctypes variable in the /etc/krb5.conf file did not apply to the default encryption types if the default_tgs_enctypes or default_tkt_enctypes attributes were not set. Consequently, Kerberos clients were able to request deprecated cipher suites like RC4, which may cause other processes to fail. With this update, encryption types specified in the permitted_enctypes variable apply to the default encryption types as well, and only permitted encryption types are requested. The RC4 cipher suite, which has been deprecated in RHEL 8, is the default encryption type for users, services, and trusts between Active Directory (AD) domains in an AD forest. To ensure support for strong AES encryption types between AD domains in an AD forest, see the AD DS: Security: Kerberos "Unsupported etype" error when accessing a resource in a trusted domain Microsoft article. To enable support for the deprecated RC4 encryption type in an IdM server for backwards compatibility with AD, use the update-crypto-policies --set DEFAULT:AD-SUPPORT command. (BZ#1791062) KDCs now correctly enforce password lifetime policy from LDAP backends Previously, non-IPA Kerberos Distribution Centers (KDCs) did not ensure maximum password lifetimes because the Kerberos LDAP backend incorrectly enforced password policies. 
With this update, the Kerberos LDAP backend has been fixed, and password lifetimes behave as expected. ( BZ#1784655 ) Password expiration notifications sent to AD clients using SSSD Previously, Active Directory clients (non-IdM) using SSSD were not sent password expiration notices because of a recent change in the SSSD interface for acquiring Kerberos credentials. The Kerberos interface has been updated and expiration notices are now sent correctly. ( BZ#1820311 ) Directory Server no longer leaks memory when using indirect COS definitions Previously, after processing an indirect Class Of Service (COS) definition, Directory Server leaked memory for each search operation that used an indirect COS definition. With this update, Directory Server frees all internal COS structures associated with the database entry after it has been processed. As a result, the server no longer leaks memory when using indirect COS definitions. ( BZ#1816862 ) Adding ID overrides of AD users now works in IdM Web UI Previously, adding ID overrides of Active Directory (AD) users to Identity Management (IdM) groups in the Default Trust View for the purpose of granting access to management roles failed when using the IdM Web UI. This update fixes the bug. As a result, you can now use both the Web UI as well as the IdM command-line interface (CLI) in this scenario. ( BZ#1651577 ) FreeRADIUS no longer generates certificates during package installation Previously, FreeRADIUS generated certificates during package installation, resulting in the following issues: If FreeRADIUS was installed using Kickstart, certificates might be generated at a time when entropy on the system was insufficient, resulting in either a failed installation or a less secure certificate. The package was difficult to build as part of an image, such as a container, because the package installation occurs on the builder machine instead of the target machine. All instances that are spawned from the image had the same certificate information. It was difficult for an end-user to generate a simple VM in their environment as the certificates would have to be removed and regenerated manually. With this update, the FreeRADIUS installation no longer generates default self-signed CA certificates nor subordinate CA certificates. When FreeRADIUS is launched via systemd : If all of the required certificates are missing, a set of default certificates are generated. If one or more of the expected certificates are present, it does not generate new certificates. ( BZ#1672285 ) FreeRADIUS now generates FIPS-compliant Diffie-Hellman parameters Due to new FIPS requirements that do not allow openssl to generate Diffie-Hellman (dh) parameters via dhparam , the dh parameter generation has been removed from the FreeRADIUS bootstrap scripts and the file, rfc3526-group-18-8192.dhparam , is included with the FreeRADIUS packages for all systems, and thus enables FreeRADIUS to start in FIPS mode. Note that you can customize /etc/raddb/certs/bootstrap and /etc/raddb/certs/Makefile to restore the DH parameter generation if required. ( BZ#1859527 ) Updating Healthcheck now properly updates both ipa-healthcheck-core and ipa-healthcheck Previously, entering yum update healthcheck did not update the ipa-healthcheck package but replaced it with the ipa-healthcheck-core package. As a consequence, the ipa-healthcheck command did not work after the update. 
This update fixes the bug, and updating ipa-healthcheck now correctly updates both the ipa-healthcheck package and the ipa-healthcheck-core package. As a result, the Healthcheck tool works correctly after the update. ( BZ#1852244 ) 5.4.11. Graphics infrastructures Laptops with hybrid Nvidia GPUs can now successfully resume from suspend Previously, the nouveau graphics driver sometimes could not power on hybrid Nvidia GPUs on certain laptops from power-save mode. As a result, the laptops failed to resume from suspend. With this update, several problems in the Runtime Power Management ( runpm ) system have been fixed. As a result, the laptops with hybrid graphics can now successfully resume from suspend. (JIRA:RHELPLAN-57572) 5.4.12. Virtualization Migrating virtual machines with the default CPU model now works more reliably Previously, if a virtual machine (VM) was created without a specific CPU model, QEMU used a default model that was not visible to the libvirt service. As a consequence, it was possible to migrate the VM to a host that did not support the default CPU model of the VM, which sometimes caused crashes and incorrect behavior in the guest OS after the migration. With this update, libvirt explicitly uses the qemu64 model as default in the XML configuration of the VM. As a result, if the user attempts migrating a VM with the default CPU model to a host that does not support that model, libvirt correctly generates an error message. Note, however, that Red Hat strongly recommends using a specific CPU model for your VMs. (JIRA:RHELPLAN-45906) 5.4.13. Containers Notes on FIPS support with Podman The Federal Information Processing Standard (FIPS) requires certified modules to be used. Previously, Podman correctly installed certified modules in containers by enabling the proper flags at startup. However, in this release, Podman does not properly set up the additional application helpers normally provided by the system in the form of the FIPS system-wide crypto-policy. Although setting the system-wide crypto-policy is not required by the certified modules it does improve the ability of applications to use crypto modules in compliant ways. To work around this problem, change your container to run the update-crypto-policies --set FIPS command before any other application code was executed. The update-crypto-policies --set FIPS command is no longer required with this fix. ( BZ#1804193 ) 5.5. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.3. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 5.5.1. Networking Enabled the xt_u32 Netfilter module The xt_u32 Netfilter module is now available in the kernel-modules-extra rpm. This module helps in packet forwarding based on the data that is inaccessible to other protocol-based packet filters and thus eases manual migration to nftables . However, xt_u32 Netfilter module is not supported by Red Hat. (BZ#1834769) nmstate available as a Technology Preview Nmstate is a network API for hosts. The nmstate packages, available as a Technology Preview, provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a pre-defined schema. Reporting of the current state and changes to the desired state both conform to the schema. 
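As a minimal illustration of this declarative workflow, the current state can be reported and a desired state applied from a YAML file. This is only a sketch: it assumes the nmstate package is installed, the file name desired-state.yml is a placeholder, and the subcommand for applying state varies between nmstate versions ( set in earlier releases, apply in later ones).
# Report the current network state as YAML
nmstatectl show
# Apply a desired state described in a YAML file
# (depending on the nmstate version, the subcommand is "set" or "apply")
nmstatectl set desired-state.yml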
For further details, see the /usr/share/doc/nmstate/README.md file and the examples in the /usr/share/doc/nmstate/examples directory. (BZ#1674456) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) XDP available as a Technology Preview The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation. (BZ#1503672) KTLS available as a Technology Preview In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality. (BZ#1570255) XDP features that are available as Technology Preview Red Hat provides the usage of the following eXpress Data Path (XDP) features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP_TX and XDP_REDIRECT return codes. The XDP hardware offloading. Before using this feature, see Unloading XDP programs on Netronome network cards that use the nfp driver fails . ( BZ#1889737 ) act_mpls module available as a Technology Preview The act_mpls module is now available in the kernel-modules-extra rpm as a Technology Preview. The module allows the application of Multiprotocol Label Switching (MPLS) actions with Traffic Control (TC) filters, for example, push and pop MPLS label stack entries with TC filters. The module also allows the Label, Traffic Class, Bottom of Stack, and Time to Live fields to be set independently. (BZ#1839311) Multipath TCP is now available as a Technology Preview Multipath TCP (MPTCP), an extension to TCP, is now available as a Technology Preview. MPTCP improves resource usage within the network and resilience to network failure. For example, with Multipath TCP on the RHEL server, smartphones with MPTCP v1 enabled can connect to an application running on the server and switch between Wi-Fi and cellular networks without interrupting the connection to the server. Note that either the applications running on the server must natively support MPTCP or administrators must load an eBPF program into the kernel to dynamically change IPPROTO_TCP to IPPROTO_MPTCP . For further details see, Getting started with Multipath TCP . (JIRA:RHELPLAN-41549) The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, an Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. (BZ#1906489) 5.5.2. 
Kernel The kexec fast reboot feature is available as Technology Preview The kexec fast reboot feature continues to be available as a Technology Preview. kexec fast reboot significantly speeds the boot process by allowing the kernel to boot directly into the second kernel without passing through the Basic Input/Output System (BIOS) first. To use this feature: Load the kexec kernel manually. Reboot the operating system. ( BZ#1769727 ) eBPF available as a Technology Preview Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which supports creating various types of maps, and also allows to load programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf (2) man page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: bpftrace , a high-level tracing language that utilizes the eBPF virtual machine. AF_XDP , a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance. (BZ#1559616) The igc driver available as a Technology Preview for RHEL 8 The igc Intel 2.5G Ethernet Linux wired LAN driver is now available on all architectures for RHEL 8 as a Technology Preview. The ethtool utility also supports igc wired LANs. (BZ#1495358) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol which implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. (BZ#1605216) 5.5.3. File systems and storage NVMe/TCP is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme-tcp.ko and nvmet-tcp.ko kernel modules have been added as a Technology Preview. The use of NVMe/TCP as either a storage client or a target is manageable with tools provided by the nvme-cli and nvmetcli packages. The NVMe/TCP target Technology Preview is included only for testing purposes and is not currently planned for full support. (BZ#1696451) File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. 
To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455) OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207) Stratis is now available as a Technology Preview Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user. 
Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . RHEL 8.3 updates Stratis to version 2.1.0. For more information, see Stratis 2.1.0 Release Notes . (JIRA:RHELPLAN-1212) IdM now supports setting up a Samba server on an IdM domain member as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . (JIRA:RHELPLAN-13195) 5.5.4. High availability and clusters Local mode version of pcs cluster setup command available as a technology preview By default, the pcs cluster setup command automatically synchronizes all configuration files to the cluster nodes. In Red Hat Enterprise Linux 8.3, the pcs cluster setup command provides the --corosync-conf option as a technology preview. Specifying this option switches the command to local mode. In this mode, pcs creates a corosync.conf file and saves it to a specified file on the local node only, without communicating with any other node. This allows you to create a corosync.conf file in a script and handle that file by means of the script. ( BZ#1839637 ) Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on the podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1784200 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. 
If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action, it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect. (BZ#1775847) 5.5.5. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use previous or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . ( BZ#1664719 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) 5.5.6. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is now available for the 64-bit ARM architecture as a Technology Preview. This enables administrators to configure and manage servers from a graphical user interface (GUI) remotely, using the VNC session. As a consequence, new administration applications are available on the 64-bit ARM architecture.
For example: Disk Usage Analyzer ( baobab ), Firewall Configuration ( firewall-config ), Red Hat Subscription Manager ( subscription-manager ), or the Firefox web browser. Using Firefox , administrators can connect to the local Cockpit daemon remotely. (JIRA:RHELPLAN-27394, BZ#1667225, BZ#1667516, BZ#1724302) GNOME desktop on IBM Z is available as a Technology Preview The GNOME desktop, including the Firefox web browser, is now available as a Technology Preview on the IBM Z architecture. You can now connect to a remote graphical session running GNOME using VNC to configure and manage your IBM Z servers. (JIRA:RHELPLAN-27737) 5.5.7. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) Intel Tiger Lake graphics available as a Technology Preview Intel Tiger Lake UP3 and UP4 Xe graphics are now available as a Technology Preview. To enable hardware acceleration with Intel Tiger Lake graphics, add the following option on the kernel command line: In this option, replace pci-id with one of the following: The PCI ID of your Intel GPU The * character to enable the i915 driver with all alpha-quality hardware (BZ#1783396) 5.5.8. Red Hat Enterprise Linux system roles The postfix role of RHEL system roles available as a Technology Preview Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview. The following roles are fully supported: kdump network selinux storage timesync For more information, see the Knowledgebase article about RHEL system roles . ( BZ#1812552 ) 5.5.9. Virtualization KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039) AMD SEV for KVM virtual machines As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware. Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 509 running VMs using SEV. Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. 
To do so, add the following to the VM's XML configuration: The recommended value for N is equal to or greater than the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater. (BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677) Intel vGPU As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working. (BZ#1528684) Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on AMD64 and IBM Z hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor, and host its own VMs. Note that in RHEL 8.2 and later, nested virtualization is fully supported for VMs running on an Intel 64 host. (JIRA:RHELPLAN-14047, JIRA:RHELPLAN-24437) Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine The feature is currently supported with Microsoft Windows Server 2019 and 2016. (BZ#1348508) 5.5.10. Containers podman container image is available as a Technology Preview The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool is used for managing containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman is based on the libpod library for container lifecycle management. The libpod library provides APIs for managing containers, pods, container images, and volumes. This container image allows you to create, modify, and run container images without the need to install the podman package on your system. The use case does not cover running this image in rootless mode as a non-root user. To pull the registry.redhat.io/rhel8/podman container image, you need an active Red Hat Enterprise Linux subscription. ( BZ#1627899 ) crun is available as a Technology Preview The crun OCI runtime has been added to the container-tools:rhel8 module. crun provides support for running containers with cgroups v2. crun also supports an annotation that allows the container to access the rootless user's additional groups. This is useful for mounting volumes in a directory where the user only has group access, or where the setgid bit is set on the directory. (BZ#1841438) The podman-machine command is unsupported The podman-machine command for managing virtual machines is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861) 5.6.
Deprecated functionality This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8. Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in the next major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer similar, identical, or more advanced functionality than the deprecated one, and provides further recommendations. For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 5.6.1. Installer and image creation Several Kickstart commands and options have been deprecated Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs. auth or authconfig device deviceprobe dmraid install lilo lilocheck mouse multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document. (BZ#1642765) The --interactive option of the ignoredisk Kickstart command has been deprecated Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. (BZ#1637872) lorax-composer back end for Image Builder is deprecated in RHEL 8 The lorax-composer back end for Image Builder is considered deprecated. It will only receive select fixes for the rest of the Red Hat Enterprise Linux 8 life cycle and will be omitted from future major releases. Red Hat recommends that you uninstall the lorax-composer back end and install the osbuild-composer back end instead. See Composing a customized RHEL system image for more details. ( BZ#1893767 ) 5.6.2. Software management rpmbuild --sign is deprecated With this update, the rpmbuild --sign command has become deprecated. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead. ( BZ#1688849 ) 5.6.3. Shells and command-line tools Metalink support for curl has been disabled A flaw was found in curl functionality in the way it handles credentials and file hash mismatch for content downloaded using Metalink. This flaw allows malicious actors controlling a hosting server to: Trick users into downloading malicious content Gain unauthorized access to provided credentials without the user's knowledge The highest threat from this vulnerability is confidentiality and integrity.
To avoid this, the Metalink support for curl has been disabled from Red Hat Enterprise Linux 8.2.0.z. As a workaround, execute the following command, after the Metalink file is downloaded: For example: (BZ#1999620) 5.6.4. Infrastructure services mailman is deprecated With this update, the mailman packages have been marked as deprecated and will not be available in the future major releases of Red Hat Enterprise Linux. (BZ#1890976) 5.6.5. Security NSS SEED ciphers are deprecated The Mozilla Network Security Services ( NSS ) library will not support TLS cipher suites that use a SEED cipher in a future release. To ensure smooth transition of deployments that rely on SEED ciphers when NSS removes support, Red Hat recommends enabling support for other cipher suites. Note that SEED ciphers are already disabled by default in RHEL. ( BZ#1817533 ) TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level: For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. ( BZ#1660839 ) DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. (BZ#1646541) SSL2 Client Hello has been deprecated in NSS The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow to start a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. (BZ#1645153) TPM 1.2 is deprecated The Trusted Platform Module (TPM) secure cryptoprocessor standard version was updated to version 2.0 in 2016. TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the major release. (BZ#1657927) 5.6.6. Networking Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. (BZ#1647725) 5.6.7. 
Kernel Installing RHEL for Real Time 8 using diskless boot is now deprecated Diskless booting allows multiple systems to share a root file system via the network. While convenient, diskless boot is prone to introducing network latency in realtime workloads. With a future minor update of RHEL for Real Time 8, the diskless booting feature will no longer be supported. ( BZ#1748980 ) The qla3xxx driver is deprecated The qla3xxx driver has been deprecated in RHEL 8. The driver will likely not be supported in future major releases of this product, and thus it is not recommended for new deployments. (BZ#1658840) The dl2k , dnet , ethoc , and dlci drivers are deprecated The dl2k , dnet , ethoc , and dlci drivers have been deprecated in RHEL 8. The drivers will likely not be supported in future major releases of this product, and thus they are not recommended for new deployments. (BZ#1660627) The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. (BZ#1878207) 5.6.8. File systems and storage The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the Tuned service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler . (BZ#1665295) LVM mirror is deprecated The LVM mirror segment type is now deprecated. Support for mirror will be removed in a future major release of RHEL. Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror . The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 logical volume . LVM mirror has several known issues. For details, see known issues in file systems and storage . (BZ#1827628) peripety is deprecated The peripety package is deprecated since RHEL 8.3. The Peripety storage event notification daemon parses system storage logs into structured storage events. It helps you investigate storage issues. ( BZ#1871953 ) NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. (BZ#1592011) cramfs has been deprecated Due to lack of users, the cramfs kernel module is deprecated. squashfs is recommended as an alternative solution. (BZ#1794513) 5.6.9. Identity Management openssh-ldap has been deprecated The openssh-ldap subpackage has been deprecated in Red Hat Enterprise Linux 8 and will be removed in RHEL 9. 
As the openssh-ldap subpackage is not maintained upstream, Red Hat recommends using SSSD and the sss_ssh_authorizedkeys helper, which integrate better with other IdM solutions and are more secure. By default, the SSSD ldap and ipa providers read the sshPublicKey LDAP attribute of the user object, if available. Note that you cannot use the default SSSD configuration for the ad provider or IdM trusted domains to retrieve SSH public keys from Active Directory (AD), since AD does not have a default LDAP attribute to store a public key. To allow the sss_ssh_authorizedkeys helper to get the key from SSSD, enable the ssh responder by adding ssh to the services option in the sssd.conf file. See the sssd.conf(5) man page for details. To allow sshd to use sss_ssh_authorizedkeys , add the AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys and AuthorizedKeysCommandUser nobody options to the /etc/ssh/sshd_config file as described by the sss_ssh_authorizedkeys(1) man page. ( BZ#1871025 ) DES and 3DES encryption types have been removed Due to security reasons, the Data Encryption Standard (DES) algorithm has been deprecated and disabled by default since RHEL 7. With the recent rebase of Kerberos packages, single-DES (DES) and triple-DES (3DES) encryption types have been removed from RHEL 8. If you have configured services or users to only use DES or 3DES encryption, you might experience service interruptions such as: Kerberos authentication errors unknown enctype encryption errors Kerberos Distribution Centers (KDCs) with DES-encrypted Database Master Keys ( K/M ) fail to start Perform the following actions to prepare for the upgrade: Check if your KDC uses DES or 3DES encryption with the krb5check open source Python scripts. See krb5check on GitHub. If you are using DES or 3DES encryption with any Kerberos principals, re-key them with a supported encryption type, such as Advanced Encryption Standard (AES). For instructions on re-keying, see Retiring DES from MIT Kerberos Documentation. Test independence from DES and 3DES by temporarily setting the following Kerberos options before upgrading: In /var/kerberos/krb5kdc/kdc.conf on the KDC, set supported_enctypes and do not include des or des3 . For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set allow_weak_crypto to false . It is false by default. For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set permitted_enctypes , default_tgs_enctypes , and default_tkt_enctypes and do not include des or des3 . If you do not experience any service interruptions with the test Kerberos settings from the step, remove them and upgrade. You do not need those settings after upgrading to the latest Kerberos packages. ( BZ#1877991 ) The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities. (JIRA:RHELDOCS-16612) 5.6.10. Desktop The libgnome-keyring library has been deprecated The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards. 
(BZ#1607766)

The AlternateTab extension has been removed
The gnome-shell-extension-alternate-tab package, which provides the AlternateTab GNOME Shell extension, has been removed. To configure the window-switching behavior, set a keyboard shortcut in keyboard settings. For more information, see the following article: Using Alternate-Tab in Gnome 3.32 or later. (BZ#1922488)

5.6.11. Graphics infrastructures

AGP graphics cards are no longer supported
Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Use graphics cards with the PCI Express bus as the recommended replacement. (BZ#1569610)

5.6.12. The web console

The web console no longer supports incomplete translations
The RHEL web console no longer provides translations for languages that have translations available for less than 50% of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead. ( BZ#1666722 )

5.6.13. Red Hat Enterprise Linux System Roles

The geoipupdate package has been deprecated
The geoipupdate package requires a third-party subscription and it also downloads proprietary content. Therefore, the geoipupdate package has been deprecated, and will be removed in the next major RHEL version. (BZ#1874892)

5.6.14. Virtualization

SPICE has been deprecated
The SPICE remote display protocol has become deprecated. As a result, SPICE will remain supported in RHEL 8, but Red Hat recommends using alternate solutions for remote display streaming:
For remote console access, use the VNC protocol.
For advanced remote display functions, use third-party tools such as RDP, HP RGS, or Mechdyne TGX.
Note that the QXL graphics device, which is used by SPICE, has become deprecated as well. (BZ#1849563)

virt-manager has been deprecated
The Virtual Machine Manager application, also known as virt-manager, has been deprecated. The RHEL 8 web console, also known as Cockpit, is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI; a setup sketch follows at the end of this section. Note, however, that some features available in virt-manager may not yet be available in the RHEL 8 web console. (JIRA:RHELPLAN-10304)

Virtual machine snapshots are not properly supported in RHEL 8
The current mechanism of creating virtual machine (VM) snapshots has been deprecated, as it is not working reliably. As a consequence, it is recommended not to use VM snapshots in RHEL 8. Note that a new VM snapshot mechanism is under development and will be fully implemented in a future minor release of RHEL 8. ( BZ#1686057 )

The Cirrus VGA virtual GPU type has been deprecated
With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga or virtio-vga devices instead of Cirrus VGA. (BZ#1651994)

5.6.15. Containers

Podman varlink-based REST API V1 has been deprecated
The Podman varlink-based REST API V1 has been deprecated upstream in favor of the new Podman REST API V2. This functionality will be removed in a later release of Red Hat Enterprise Linux 8. (JIRA:RHELPLAN-60226)
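For the virt-manager deprecation above, virtualization management in the web console is provided by the cockpit-machines package. The following is a minimal setup sketch; the package and unit names are the standard ones for RHEL 8, but verify them against the web console documentation:

# install the web console and its virtualization page, then enable the console socket
yum install cockpit cockpit-machines
systemctl enable --now cockpit.socket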
5.6.16. Deprecated packages

The following packages have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux:
389-ds-base-legacy-tools
authd
custodia
hostname
libidn
lorax-composer
mercurial
net-tools
network-scripts
nss-pam-ldapd
sendmail
yp-tools
ypbind
ypserv

5.7. Known issues

This part describes known issues in Red Hat Enterprise Linux 8.3.

5.7.1. Installer and image creation

The auth and authconfig Kickstart commands require the AppStream repository
The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697)

The reboot --kexec and inst.kexec commands do not provide a predictable system state
Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896)

Network access is not enabled by default in the installation program
Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled. To work around this problem, add ip=dhcp to the boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used. (BZ#1757877)

The new osbuild-composer back end does not replicate the blueprint state from lorax-composer on upgrades
For Image Builder users upgrading from the lorax-composer back end to the new osbuild-composer back end, blueprints can disappear. As a result, once the upgrade is complete, the blueprints do not display automatically. To work around this problem, perform the following steps.
Prerequisites
You have the composer-cli CLI utility installed.
Procedure
Run the command to load the lorax-composer based blueprints into the new osbuild-composer back end; a hedged example is sketched after this item.
As a result, the same blueprints are now available in the osbuild-composer back end.
Additional resources
For more details about this known issue, see the Image Builder blueprints are no longer present following an update to Red Hat Enterprise Linux 8.3 article. ( BZ#1897383 )
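A sketch of the blueprint-loading step referenced above. The composer-cli blueprints push subcommand is the documented way to load a blueprint from a TOML file; the directory that holds the saved lorax-composer blueprints is an assumption and may differ on your system:

# the blueprint directory below is an assumption -- adjust it to where the old *.toml blueprints are stored
for blueprint in /var/lib/lorax/composer/blueprints/git/workspace/master/*.toml; do
    composer-cli blueprints push "$blueprint"
done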
Self-signed HTTPS server cannot be used in Kickstart installation
Currently, the installer fails to install from a self-signed HTTPS server when the installation source is specified in the Kickstart file and the --noverifyssl option is used. To work around this problem, append the inst.noverifyssl parameter to the kernel command line when starting the Kickstart installation. For example, boot the installer with inst.noverifyssl added to the boot options. (BZ#1745064)

GUI installation might fail if an attempt to unregister using the CDN is made before the repository refresh is completed
Since RHEL 8.2, when registering your system and attaching subscriptions using the Content Delivery Network (CDN), a refresh of the repository metadata is started by the GUI installation program. The refresh process is not part of the registration and subscription process, and as a consequence, the Unregister button is enabled in the Connect to Red Hat window. Depending on the network connection, the refresh process might take more than a minute to complete. If you click the Unregister button before the refresh process is completed, the GUI installation might fail as the unregister process removes the CDN repository files and the certificates required by the installation program to communicate with the CDN. To work around this problem, complete the following steps in the GUI installation after you have clicked the Register button in the Connect to Red Hat window:
From the Connect to Red Hat window, click Done to return to the Installation Summary window.
From the Installation Summary window, verify that the Installation Source and Software Selection status messages in italics are not displaying any processing information.
When the Installation Source and Software Selection categories are ready, click Connect to Red Hat.
Click the Unregister button.
After performing these steps, you can safely unregister the system during the GUI installation. (BZ#1821192)

Registration fails for user accounts that belong to multiple organizations
Currently, when you attempt to register a system with a user account that belongs to multiple organizations, the registration process fails with the error message You must specify an organization for new units. To work around this problem, you can use one of the following options:
Use a different user account that does not belong to multiple organizations.
Use the Activation Key authentication method available in the Connect to Red Hat feature for GUI and Kickstart installations.
Skip the registration step in Connect to Red Hat and use Subscription Manager to register your system post-installation. ( BZ#1822880 )

RHEL installer fails to start when InfiniBand network interfaces are configured using installer boot options
When you configure InfiniBand network interfaces at an early stage of RHEL installation using installer boot options (for example, to download the installer image using a PXE server), the installer fails to activate the network interfaces. This issue occurs because the RHEL NetworkManager fails to recognize the network interfaces in InfiniBand mode, and instead configures Ethernet connections for the interfaces. As a result, connection activation fails, and if connectivity over the InfiniBand interface is required at an early stage, the RHEL installer fails to start the installation. To work around this issue, create new installation media that include the updated Anaconda and NetworkManager packages, using the Lorax tool. For more information, see Unable to install Red Hat Enterprise Linux 8.3.0 with InfiniBand network interfaces. (BZ#1890261)

Anaconda installation fails when the NVDIMM device namespace is set to devdax mode
Anaconda installation fails with a traceback after booting with an NVDIMM device namespace set to devdax mode before the GUI installation. To work around this problem, reconfigure the NVDIMM device to set the namespace to a different mode than devdax before the installation begins. As a result, you can proceed with the installation. (BZ#1891827)
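As an illustration of the namespace reconfiguration above, the ndctl utility can switch a namespace to a different mode. This is a hedged sketch; the namespace name and target mode are placeholders:

# reconfigure the namespace out of devdax mode (here to fsdax) before starting the installation
ndctl create-namespace --force --reconfig=namespace0.0 --mode=fsdax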
Local Media installation source is not detected when booting the installation from a USB that is created using a third-party tool
When booting the RHEL installation from a USB drive that is created using a third-party tool, the installer fails to detect the Local Media installation source (only 'Red Hat CDN' is detected). This issue occurs because the default boot option inst.stage2= attempts to search for the iso9660 image format. However, a third-party tool might create an ISO image with a different format. As a workaround, use one of the following solutions:
When booting the installation, press the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo= .
To create a bootable USB device on Windows, use Fedora Media Writer.
When using a third-party tool like Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third-party tool to create a bootable USB device.
For more information on performing any of these workarounds, see Installation media is not auto detected during the installation of RHEL 8.3. (BZ#1877697)

Anaconda now shows a dialog for ldl or unformatted DASD disks in text mode
Previously, during an installation in text mode, Anaconda failed to show a dialog for Linux disk layout (ldl) or unformatted Direct-Access Storage Device (DASD) disks. As a result, users were unable to utilize those disks for the installation. With this update, in text mode Anaconda recognizes ldl and unformatted DASD disks and shows a dialog where users can format them properly so that they can be used for the installation. (BZ#1874394)

Red Hat Insights client fails to register the operating system when using the graphical installer
Currently, the installation fails with an error at the end, which points to the Insights client. To work around this problem, uncheck the Connect to Red Hat Insights option during the Connect to Red Hat step before registering the system in the installer. As a result, you can complete the installation and register with Insights afterwards by using the insights-client --register command. ( BZ#1931069 )

5.7.2. Subscription management

syspurpose addons have no effect on the subscription-manager attach --auto output
In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role, usage, service_level_agreement, and addons. Currently, only role, usage, and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values for the addons argument will not observe any effect on the subscriptions that are auto-attached. ( BZ#1687900 )

5.7.3. Infrastructure services

libmaxminddb-devel-debuginfo.rpm is removed when running dnf update
When performing the dnf update command, the binary mmdblookup tool is moved from the libmaxminddb-devel subpackage to the main libmaxminddb package. Consequently, the libmaxminddb-devel-debuginfo.rpm package is removed, which might create a broken update path for this package. To work around this problem, remove the libmaxminddb-devel-debuginfo package before running the dnf update command. Note: libmaxminddb-debuginfo is the new debuginfo package. (BZ#1642001)

5.7.4.
Security Users can run sudo commands as locked users In systems where sudoers permissions are defined with the ALL keyword, sudo users with permissions can run sudo commands as users whose accounts are locked. Consequently, locked and expired accounts can still be used to execute commands. To work around this problem, enable the newly implemented runas_check_shell option together with proper settings of valid shells in /etc/shells . This prevents attackers from running commands under system accounts such as bin . (BZ#1786990) GnuTLS fails to resume current session with the NSS server When resuming a TLS (Transport Layer Security) 1.3 session, the GnuTLS client waits 60 milliseconds plus an estimated round trip time for the server to send session resumption data. If the server does not send the resumption data within this time, the client creates a new session instead of resuming the current session. This incurs no serious adverse effects except for a minor performance impact on a regular session negotiation. ( BZ#1677754 ) libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. (BZ#1666328) udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. ( BZ#1763210 ) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) File permissions of /etc/passwd- are not aligned with the CIS RHEL 8 Benchmark 1.0.0 Because of an issue with the CIS Benchmark, the remediation of the SCAP rule that ensures permissions on the /etc/passwd- backup file configures permissions to 0644 . However, the CIS Red Hat Enterprise Linux 8 Benchmark 1.0.0 requires file permissions 0600 for that file. As a consequence, the file permissions of /etc/passwd- are not aligned with the benchmark after remediation. ( BZ#1858866 ) SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks. 
To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line, as described in the Changing SELinux modes at boot time section of the Using SELinux title, if your scenario really requires completely disabling SELinux. (JIRA:RHELPLAN-34199)

ssh-keyscan cannot retrieve RSA keys of servers in FIPS mode
The SHA-1 algorithm is disabled for RSA signatures in FIPS mode, which prevents the ssh-keyscan utility from retrieving RSA keys of servers operating in that mode. To work around this problem, use ECDSA keys instead, or retrieve the keys locally from the /etc/ssh/ssh_host_rsa_key.pub file on the server. ( BZ#1744108 )

OpenSSL incorrectly handles PKCS #11 tokens that do not support raw RSA or RSA-PSS signatures
The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the required configuration lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file. As a result, a TLS connection can be established in the described scenario. (BZ#1685470)

OpenSSL in FIPS mode accepts only specific D-H parameters
In FIPS mode, Transport Layer Security (TLS) clients that use OpenSSL return a bad dh value error and abort TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with D-H parameters compliant with NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups. (BZ#1810911)

Removing the rpm-plugin-selinux package leads to removing all selinux-policy packages from the system
Removing the rpm-plugin-selinux package disables SELinux on the machine. It also removes all selinux-policy packages from the system. Repeated installation of the rpm-plugin-selinux package then installs the selinux-policy-minimum SELinux policy, even if the selinux-policy-targeted policy was previously present on the system. However, the repeated installation does not update the SELinux configuration file to account for the change in policy. As a consequence, SELinux is disabled even upon reinstallation of the rpm-plugin-selinux package. To work around this problem:
Enter the umount /sys/fs/selinux/ command.
Manually install the missing selinux-policy-targeted package.
Edit the /etc/selinux/config file so that the policy is set to SELINUX=enforcing.
Enter the command load_policy -i .
As a result, SELinux is enabled and running the same policy as before. (BZ#1641631)

systemd service cannot execute commands from arbitrary paths
The systemd service cannot execute commands from arbitrary paths such as /home/user/bin because the SELinux policy package does not include any such rule. Consequently, custom services that are executed from non-system paths fail, and SELinux denies access and logs Access Vector Cache (AVC) denial audit messages. To work around this problem, do one of the following:
Execute the command through a shell with the -c option; a sketch follows after this list.
Execute the command from a common path, such as the /bin, /sbin, /usr/sbin, /usr/local/bin, and /usr/local/sbin directories.
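As an illustration of the first option above, the non-system path can be wrapped in a shell invocation in the unit file; the application path is a placeholder:

# in the systemd unit file, wrap the command in a shell call
ExecStart=/usr/bin/bash -c '/home/user/bin/myapp'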
( BZ#1860443 ) rpm_verify_permissions fails in the CIS profile The rpm_verify_permissions rule compares file permissions to package default permissions. However, the Center for Internet Security (CIS) profile, which is provided by the scap-security-guide packages, changes some file permissions to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions fails. To work around this problem, manually verify that these files have the following permissions: /etc/cron.d (0700) /etc/cron.hourly (0700) /etc/cron.monthly (0700) /etc/crontab (0600) /etc/cron.weekly (0700) /etc/cron.daily (0700) ( BZ#1843913 ) Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8 The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap which might cause confusion. That is done to preserve backward compatibility with Red Hat Enterprise Linux 7. (BZ#1665082) Certain sets of interdependent rules in SSG can fail Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules. ( BZ#1750755 ) OSCAP Anaconda Addon does not install all packages in text mode The OSCAP Anaconda Addon plugin cannot modify the list of packages selected for installation by the system installer if the installation is running in text mode. Consequently, when a security policy profile is specified using Kickstart and the installation is running in text mode, any additional packages required by the security policy are not installed during installation. To work around this problem, either run the installation in graphical mode or specify all packages that are required by the security policy profile in the security policy in the %packages section in your Kickstart file. As a result, packages that are required by the security policy profile are not installed during RHEL installation without one of the described workarounds, and the installed system is not compliant with the given security policy profile. ( BZ#1674001 ) OSCAP Anaconda Addon does not correctly handle customized profiles The OSCAP Anaconda Addon plugin does not properly handle security profiles with customizations in separate files. Consequently, the customized profile is not available in the RHEL graphical installation even when you properly specify it in the corresponding Kickstart section. To work around this problem, follow the instructions in the Creating a single SCAP data stream from an original DS and a tailoring file Knowledgebase article. As a result of this workaround, you can use a customized SCAP profile in the RHEL graphical installation. (BZ#1691305) OSPP-based profiles are incompatible with GUI package groups. GNOME packages installed by the Server with GUI package group require the nfs-utils package that is not compliant with the Operating System Protection Profile (OSPP). 
As a consequence, selecting the Server with GUI package group during the installation of a system with OSPP or OSPP-based profiles, for example, Security Technical Implementation Guide (STIG), OpenSCAP displays a warning that the selected package group is not compatible with the security policy. If the OSPP-based profile is applied after the installation, the system is not bootable. To work around this problem, do not install the Server with GUI package group or any other groups that install GUI when using the OSPP profile and OSPP-based profiles. When you use the Server or Minimal Install package groups instead, the system installs without issues and works correctly. ( BZ#1787156 ) Installation with the Server with GUI or Workstation software selections and CIS security profile is not possible The CIS security profile is not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS profile is not possible. An attempted installation using the CIS profile and either of these software selections will generate the error message: To work around the problem, do not use the CIS security profile with the Server with GUI or Workstation software selections. ( BZ#1843932 ) Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. ( BZ#1834716 ) Certain rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog : To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. ( BZ#1679512 ) crypto-policies incorrectly allow Camellia ciphers The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default. To work around the problem, apply the NO-CAMELLIA subpolicy: In the command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously. As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround. ( BZ#1919155 ) 5.7.5. Networking The iptables utility now requests module loading for commands that update a chain regardless of the NLM_F_CREATE flag Previously, when setting a chain's policy, the iptables-nft utility generated a NEWCHAIN message but did not set the NLM_F_CREATE flag. As a consequence, the RHEL 8 kernel did not load any modules and the resulting update chain command failed if the associated kernel modules were not manually loaded. With this update, the iptables-nft utility now requests module loading for all commands that update a chain and users are able to set a chain's policy using the iptables-nft utility without manually loading the associated modules. 
(BZ#1812666)

Support for updating packet/byte counters in the kernel was changed incorrectly between RHEL 7 and RHEL 8
When referring to an ipset command with enabled counters from an iptables rule that specifies additional constraints on matching ipset entries, the ipset counters are updated only if all the additional constraints match. This is also problematic with --packets-gt or --bytes-gt constraints. As a result, when migrating an iptables ruleset from RHEL 7 to RHEL 8, the rules involving ipset lookups may stop working and need to be adjusted. To work around this problem, avoid using the --packets-gt or --bytes-gt options and replace them with the --packets-lt or --bytes-lt options. (BZ#1806882)

Unloading XDP programs fails on Netronome network cards that use the nfp driver
The nfp driver for Netronome network cards contains a bug. Therefore, unloading eXpress Data Path (XDP) programs fails if you use such cards and load the XDP program using the IFLA_XDP_EXPECTED_FD feature with the XDP_FLAGS_REPLACE flag. For example, this bug affects XDP programs that are loaded using the libxdp library. Currently, there is no workaround available for the problem. ( BZ#1880268 )

Anaconda does not have network access when using DHCP in the ip boot option
The initial RAM disk (initrd) uses NetworkManager to manage networking. The dracut NetworkManager module provided by the RHEL 8.3 ISO file incorrectly assumes that the first field of the ip option in the Anaconda boot options is always set. As a consequence, if you use DHCP and set ip=::::<host_name>::dhcp , NetworkManager does not retrieve an IP address, and the network is not available in Anaconda. You have the following options to work around the problem:
Set the first field of the ip option to . (period), for example ip=.::::<host_name>::dhcp . Note that this workaround will not work in future versions of RHEL when the problem has been fixed.
Re-create the boot.iso file using the latest packages from the BaseOS repository, which contain a fix for the bug. Note that Red Hat does not support self-created ISO files.
As a result, RHEL retrieves an IP address from the DHCP server, and network access is available in Anaconda. (BZ#1902791)

5.7.6. Kernel

The tboot-1.9.12-2 utility causes a boot failure in RHEL 8
The tboot utility of version 1.9.12-2 causes some systems with Trusted Platform Module (TPM) 2.0 to fail to boot in legacy mode. As a consequence, the system halts once it attempts to boot from the tboot Grand Unified Bootloader (GRUB) entry. To work around this problem, downgrade to tboot version 1.9.10. (BZ#1947839)

The kernel returns false positive warnings on IBM Z systems
In RHEL 8, IBM Z systems are missing a whitelist entry for the ZONE_DMA memory zone to allow user access. Consequently, the kernel returns false positive warnings. The warnings appear when accessing certain system information through the sysfs interface, for example, by running the debuginfo.sh script. To work around this problem, add the hardened_usercopy=off parameter to the kernel command line. As a result, no warning messages are displayed in the described scenario. (BZ#1660290)

The rngd service busy wait causes total CPU consumption in FIPS mode
A new kernel entropy source for FIPS mode has been added for kernels starting with version 4.18.0-193.10. Consequently, when in FIPS mode, the rngd service busy waits on the poll() system call for the /dev/random device, thereby causing consumption of 100% of CPU time.
To work around this problem, stop and disable rngd by running: As a result, rngd no longer busy waits on poll() in the described scenario. (BZ#1884857) softirq changes can cause the localhost interface to drop UDP packets when under heavy load Changes in the Linux kernel's software interrupt ( softirq ) handling are done to reduce denial of service (DOS) effects. Consequently, this leads to situations where the localhost interface drops User Datagram Protocol (UDP) packets under heavy load. To work around this problem, increase the size of the network device backlog buffer to value 6000: In Red Hat tests, this value was sufficient to prevent packet loss. More heavily loaded systems might require larger backlog values. Increased backlogs have the effect of potentially increasing latency on the localhost interface. The result is to increase the buffer and allow more packets to be waiting for processing, which reduces the chances of dropping localhost packets. (BZ#1779337) A vmcore capture fails after memory hot-plug or unplug operation After performing the memory hot-plug or hot-unplug operation, the event comes after updating the device tree which contains memory layout information. Thereby the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions meet: A little-endian variant of IBM Power System runs RHEL 8. The kdump or fadump service is enabled on the system. Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation. To work around this problem, restart the kdump service after hot-plug or hot-unplug: As a result, vmcore is successfully saved in the described scenario. (BZ#1793389) Using irqpoll causes vmcore generation failure Due to an existing problem with the nvme driver on the 64-bit ARM architectures that run on the Amazon Web Services (AWS) cloud platforms, the vmcore generation fails when you provide the irqpoll kernel command line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory after a kernel crash. To work around this problem: Add irqpoll to the KDUMP_COMMANDLINE_REMOVE key in the /etc/sysconfig/kdump file. Restart the kdump service by running the systemctl restart kdump command. As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash. Note that the kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service. (BZ#1654962) Debug kernel fails to boot in crash capture environment in RHEL 8 Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment. (BZ#1659609) zlib may slow down a vmcore capture in some compression functions The kdump configuration file uses the lzo compression format ( makedumpfile -l ) by default. When you modify the configuration file using the zlib compression format, ( makedumpfile -c ) it is likely to bring a better compression factor at the expense of slowing down the vmcore capture process. 
As a consequence, it takes the kdump upto four times longer to capture a vmcore with zlib , as compared to lzo . As a result, Red Hat recommends using the default lzo for cases where speed is the main driving factor. However, if the target machine is low on available space, zlib is a better option. (BZ#1790635) The HP NMI watchdog does not always generate a crash dump In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. The missing NMI is initiated by one of two conditions: The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user. The hpwdt watchdog. The expiration by default sends an NMI to the server. Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and if configured, the kdump service generates a vmcore file. Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected. In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server. In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR). The HPE Gen9 Server line experiences this problem in single-digit percentages. The Gen10 at an even smaller frequency. (BZ#1602962) The tuned-adm profile powersave command causes the system to become unresponsive Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications. (BZ#1609288) The default 7 4 1 7 printk value sometimes causes temporary system unresponsiveness The default 7 4 1 7 printk value allows for better debugging of the kernel activity. However, when coupled with a serial console, this printk setting can cause intense I/O bursts that can lead to a RHEL system becoming temporarily unresponsive. To work around this problem, we have added a new optimize-serial-console TuneD profile, which reduces the default printk value to 4 4 1 7 . Users can instrument their system as follows: Having a lower printk value persistent across a reboot reduces the likelihood of system hangs. Note that this setting change comes at the expense of losing the extra debugging information. For more information about the newly added feature, see A new optimize-serial-console TuneD profile to reduce I/O to serial consoles by lowering the printk value . (JIRA:RHELPLAN-28940) The kernel ACPI driver reports it has no access to a PCIe ECAM memory region The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot: However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly. 
You can verify that PCI ECAM works correctly by accessing the PCIe configuration space over the 256 byte offset with the following output: As a result, you can ignore the warning message. For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution. (BZ#1868526) The OPEN MPI library may trigger run-time failures with default PML In OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point communicator (PML). The later versions of OPEN MPI 4.0.x series deprecated openib Byte Transfer Layer (BTL). However, OPEN MPI, when run over a homogeneous cluster (same hardware and software configuration), UCX still uses openib BTL for MPI one-sided operations. As a consequence, this may trigger execution errors. To work around this problem: Run the mpirun command using following parameters: where, The -mca btl openib parameter disables openib BTL The -mca pml ucx parameter configures OPEN MPI to use ucx PML. The x UCX_NET_DEVICES= parameter restricts UCX to use the specified devices The OPEN MPI, when run over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this may cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority as: Run the mpirun command using following parameters: As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX. (BZ#1866402) 5.7.7. File systems and storage The /boot file system cannot be placed on LVM You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons: On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition. RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard. The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS. Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume. (BZ#1496229) LVM no longer allows creating volume groups with mixed block sizes LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size. To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file. ( BZ#1768536 ) Limitations of LVM writecache The writecache LVM caching method has the following limitations, which are not present in the cache method: You cannot name a writecache logical volume when using pvmove commands. You cannot use logical volumes with writecache in combination with thin pools or VDO. 
The following limitation also applies to the cache method: You cannot resize a logical volume while cache or writecache is attached to it. (JIRA:RHELPLAN-27987, BZ#1798631 , BZ#1808012) LVM mirror devices that store a LUKS volume sometimes become unresponsive Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations. To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage. The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 device . (BZ#1730502) An NFS 4.0 patch can result in reduced performance under an open-heavy workload Previously, a bug was fixed that, in some cases, could cause an NFS open operation to overlook the fact that a file had been removed or renamed on the server. However, the fix may cause slower performance with workloads that require many open operations. To work around this problem, it might help to use NFS version 4.1 or higher, which have been improved to grant delegations to clients in more cases, allowing clients to perform open operations locally, quickly, and safely. (BZ#1748451) 5.7.8. Dynamic programming languages, web and database servers getpwnam() might fail when called by a 32-bit application When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command. ( BZ#1803161 ) Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. With this update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. (BZ#1819607) PAM plug-in does not work in MariaDB MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream, which is available with RHEL 8.4. ( BZ#1942330 ) 5.7.9. Identity Management Installing KRA fails if all KRA members are hidden replicas The ipa-kra-install utility fails on a cluster where the Key Recovery Authority (KRA) is already present, if the first KRA instance is installed on a hidden replica. Consequently, you cannot add further KRA instances to the cluster. To work around this problem, unhide the hidden replica that has the KRA role before you add new KRA instances. You can hide it again when ipa-kra-install completes successfully. 
( BZ#1816784 ) Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. ( BZ#1729215 ) Certificates issued by PKI ACME Responder connected to PKI CA may fail OCSP validation The default ACME certificate profile provided by PKI CA contains a sample OCSP URL that does not point to an actual OCSP service. As a consequence, if PKI ACME Responder is configured to use a PKI CA issuer, the certificates issued by the responder may fail OCSP validation. To work around this problem, you need to set the policyset.serverCertSet.5.default.params.authInfoAccessADLocation_0 property to a blank value in the /usr/share/pki/ca/profiles/ca/acmeServerCert.cfg configuration file: In the ACME Responder configuration file, change the line policyset.serverCertSet.5.default.params.authInfoAccessADLocation_0=http://ocsp.example.com to policyset.serverCertSet.5.default.params.authInfoAccessADLocation_0= . Restart the service and regenerate the certificate. As a result, PKI CA will generate ACME certificates with an autogenerated OCSP URL that points to an actual OCSP service. ( BZ#1868233 ) FreeRADIUS silently truncates Tunnel-Passwords longer than 249 characters If a Tunnel-Password is longer than 249 characters, the FreeRADIUS service silently truncates it. This may lead to unexpected password incompatibilities with other systems. To work around the problem, choose a password that is 249 characters or fewer. ( BZ#1723362 ) The /var/log/lastlog sparse file on IdM hosts can cause performance problems During the IdM installation, a range of 200,000 UIDs from a total of 10,000 possible ranges is randomly selected and assigned. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. However, having high UIDs can create problems with the /var/log/lastlog file. For example, if a user with the UID of 1280000008 logs in to an IdM client, the local /var/log/lastlog file size increases to almost 400 GB. Although the actual file is sparse and does not use all that space, certain applications are not designed to identify sparse files by default and may require a specific option to handle them. For example, if the setup is complex and a backup and copy application does not handle sparse files correctly, the file is copied as if its size was 400 GB. This behavior can cause performance problems. To work around this problem: In case of a standard package, refer to its documentation to identify the option that handles sparse files. In case of a custom application, ensure that it is able to manage sparse files such as /var/log/lastlog correctly. (JIRA:RHELPLAN-59111) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . 
Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 5.7.10. Desktop Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. ( BZ#1668760 ) Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release. ( BZ#1717947 ) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host. (BZ#1583445) 5.7.11. Graphics infrastructures radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) Multiple HDR displays on a single MST topology may not power on On systems using NVIDIA Turing GPUs with the nouveau driver, using a DisplayPort hub (such as a laptop dock) with multiple monitors which support HDR plugged into it may result in failure to turn on. This is due to the system erroneously thinking there is not enough bandwidth on the hub to support all of the displays. (BZ#1812577) Unable to run graphical applications using sudo command When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication. To work around this problem, use the sudo -E command to run graphical applications as a root user. ( BZ#1673073 ) VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth. 
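The exact lines are not shown in this text; the following is a sketch. The force_rebuild 1 line is confirmed by the note below, and dracut_args --omit-drivers is the usual way to omit a driver from the kdump initramfs; verify both against the kdump.conf(5) man page:

dracut_args --omit-drivers "radeon"
force_rebuild 1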
To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc server, replace the -depth 16 option with -depth 24 in the Xvnc configuration. As a result, VNC clients display the correct colors but use more network bandwidth with the server. ( BZ#1886147 ) Hardware acceleration is not supported on ARM Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture. To enable hardware acceleration or Vulkan on ARM, install the proprietary Nvidia driver. (JIRA:RHELPLAN-57914) The RHEL installer becomes unresponsive with NVIDIA Ampere RHEL 8.3.0 does not support the NVIDIA Ampere GPUs. If you start the RHEL installation on a system that has an NVIDIA Ampere GPU, the installer becomes unresponsive. As a consequence, the installation cannot finish successfully. The NVIDIA Ampere family includes the following GPU models: GeForce RTX 3060 Ti GeForce RTX 3070 GeForce RTX 3080 GeForce RTX 3090 RTX A6000 NVIDIA A40 NVIDIA A100 NVIDIA A100 80GB To work around the problem, disable the nouveau graphics driver and install RHEL in text mode: Boot into the boot menu of the installer. Add the nouveau.modeset=0 option on the kernel command line. For details, see Editing boot options . Install RHEL on the system. Boot into the newly installed RHEL. At the boot menu, add the nouveau.modeset=0 option on the kernel command line. Disable the nouveau driver permanently: As a result, the installation has finished successfully and RHEL now runs in text mode. Optionally, you can install the proprietary NVIDIA GPU driver to enable graphics. For instructions, see How to install the NVIDIA proprietary driver on RHEL 8 . (BZ#1903890) 5.7.12. The web console Unprivileged users can access the Subscriptions page If a non-administrator navigates to the Subscriptions page of the web console, the web console displays a generic error message Cockpit had an unexpected internal error . To work around this problem, sign in to the web console with a privileged user and make sure to check the Reuse my password for privileged tasks checkbox. ( BZ#1674337 ) 5.7.13. Red Hat Enterprise Linux system roles oVirt input and the elasticsearch output functionalities are not supported in system roles Logging The oVirt input and the elasticsearch output are not supported in system roles Logging although they are mentioned in the README file. There is no workaround available at the moment. ( BZ#1889468 ) 5.7.14. Virtualization Displaying multiple monitors of virtual machines that use Wayland is not possible with QXL Using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that is using the Wayland display server causes the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely. To work around this problem, use virtio-gpu instead of qxl as the GPU device for VMs that use Wayland. (BZ#1642887) virsh iface-\* commands do not work consistently Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-\* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications. (BZ#1664592) Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. 
If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. ( BZ#1719687 ) Attaching LUN devices to virtual machines using virtio-blk does not work The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller. Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun' . (BZ#1777138) Virtual machines using Cooperlake cannot boot when TSX is disabled on the host Virtual machines (VMs) that use the Cooperlake CPU model currently fail to boot when the TSX CPU flag is disabled on the host. Instead, the host displays the following error message: To make VMs with Cooperlake usable on such a host, disable the HLE, RTM, and TAA_NO flags in the VM's XML configuration: ( BZ#1860743 ) Virtual machines sometimes cannot boot on Witherspoon hosts Virtual machines (VMs) that use the pseries-rhel7.6.0-sxxm machine type in some cases fail to boot on Power9 S922LC for HPC hosts (also known as Witherspoon) that use the DD2.2 or DD2.3 CPU. Attempting to boot such a VM instead generates the following error message: To work around this problem, configure the VM's XML configuration as follows: ( BZ#1732726 ) 5.7.15. RHEL in cloud environments GPU problems on Azure NV6 instances When running RHEL 8 as a guest operating system on a Microsoft Azure NV6 instance, resuming the virtual machine (VM) from hibernation sometimes causes the VM's GPU to work incorrectly. When this occurs, the kernel logs the following message: (BZ#1846838) kdump sometimes does not start on Azure and Hyper-V On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump kernel in some cases fails when post-exec notifiers are enabled. To work around this problem, disable crash kexec post notifiers: (BZ#1865745) Setting static IP in a RHEL 8 virtual machine on a VMWare host does not work Currently, when using RHEL 8 as a guest operating system of a virtual machine (VM) on a VMWare host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. ( BZ#1750862 ) Core dumping RHEL 8 virtual machines with certain NICs to a remote machine on Azure takes longer than expected Currently, using the kdump utility to save the core dump file of a RHEL 8 virtual machine (VM) on a Microsoft Azure hypervisor to a remote machine does not work correctly when the VM is using a NIC with enabled accelerated networking. As a consequence, the dump file is saved after approximately 200 seconds, instead of immediately. In addition, the following error message is logged on the console before the dump file is saved. (BZ#1854037) TX/RX packet counters do not increase after virtual machines resume from hibernation The TX/RX packet counters stop increasing when a RHEL 8 virtual machine (VM), with a CX4 VF NIC, resumes from hibernation on Microsoft Azure. To keep the counters working, restart the VM. Note that doing so will reset the counters.
(BZ#1876527) RHEL 8 virtual machines fail to resume from hibernation on Azure The GUID of the virtual function (VF), vmbus device , changes when a RHEL 8 virtual machine (VM), with SR-IOV enabled, is hibernated and deallocated on Microsoft Azure . As a result, when the VM is restarted, it fails to resume and crashes. As a workaround, hard reset the VM using the Azure serial console. (BZ#1876519) Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8 becomes unresponsive with a "Migration status: active" status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. (BZ#1741436) 5.7.16. Supportability redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. ( BZ#1802026 ) 5.7.17. Containers UDICA is not expected to work with 1.0 stable stream UDICA, the tool to generate SELinux policies for containers, is not expected to work with containers that are run via podman 1.0.x in the container-tools:1.0 module stream. (JIRA:RHELPLAN-25571) podman system connection add does not automatically set the default connection The podman system connection add command does not automatically set the first connection to be the default connection. To set the default connection, you must manually run the command podman system connection default <connection_name> . ( BZ#1881894 )
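As an illustration of that workaround, the following is a minimal sketch; it assumes a podman version that includes the system connection subcommands, and the connection name my-remote-host is hypothetical:
# List the connections that were added; note that none is marked as the default automatically
podman system connection list
# Explicitly mark the desired connection as the default
podman system connection default my-remote-host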
[ "tuned-adm profile throughput-performance optimize-serial-console", "cat /proc/sys/kernel/printk", "echo 'add_dracutmodules+=\" network-legacy \"' > /etc/dracut.conf.d/enable-network-legacy.conf dracut -vf --regenerate-all", "ip=__IP_address__:__peer__:__gateway_IP_address__:__net_mask__:__host_name__:__interface_name__:__configuration_method__", "grubby --args=\"page_owner=on\" --update-kernel=0 reboot", "perf script record flamegraph -F 99 -g -- stress --cpu 1 --vm-bytes 128M --timeout 10s stress: info: [4461] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd stress: info: [4461] successful run completed in 10s [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.060 MB perf.data (970 samples) ] perf script report flamegraph dumping data to flamegraph.html", "pcs resource op add my-rsc promote on-fail=\"demote\"", "pcs resource op add my-rsc monitor interval=\"10s\" on-fail=\"demote\" role=\"Master\"", "yum module install ruby:2.7", "yum module install nodejs:14", "yum module install php:7.4", "yum module install nginx:1.18", "CustomLog journald:priority format|nickname", "CustomLog journald:info combined", "sudo dnf install -y dotnet-sdk-5.0", "yum install gcc-toolset-10", "scl enable gcc-toolset-10 tool", "scl enable gcc-toolset-10 bash", "podman pull registry.redhat.io/<image_name>", "update-crypto-policies --set DEFAULT:AD-SUPPORT", "update-crypto-policies --set DEFAULT:AD-SUPPORT", "yum install gnome-session-kiosk-session", "#!/bin/sh gedit &", "chmod +x USDHOME/.local/bin/redhat-kiosk", "ip link add link en2 name mymacvtap0 address 52:54:00:11:11:11 type macvtap mode bridge chown myuser /dev/tapUSD(cat /sys/class/net/mymacvtap0/ifindex) ip link set mymacvtap0 up", "<interface type='ethernet'> <model type='virtio'/> <mac address='52:54:00:11:11:11'/> <target dev='mymacvtap0' managed='no'/> </interface>", "<channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "hugepagesz=2M hugepages=512", "hugepages=256 hugepagesz=2M hugepages=512", "hugepages=256 default_hugepagesz=2M hugepages=256 hugepages=256 default_hugepagesz=2M", "[<domain>:]<bus>:<dev>.<func>[/<dev>.<func>]* pci:<vendor>:<device>[:<subvendor>:<subdevice>]", "SyntaxError: Missing parentheses in call to 'print'", "xfs_info /mount-point | grep ftype", "i915.force_probe= pci-id", "<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>", "wget --trust-server-names --input-metalink`", "wget --trust-server-names --input-metalink <(curl -s USDURL)", "update-crypto-policies --set LEGACY", "~]# yum install network-scripts", "for blueprint in USD(find /var/lib/lorax/composer/blueprints/git/workspace/master -name '*.toml'); do composer-cli blueprints push \"USD{blueprint}\"; done", "url --url=https://SERVER/PATH --noverifyssl", "inst.ks=<URL> inst.noverifyssl", "insights-client --register", "dnf module enable libselinux-python dnf install libselinux-python", "dnf module install libselinux-python:2.8/common", "SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2", "bash -c command", "package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL", "update-crypto-policies --set DEFAULT:NO-CAMELLIA", 
"ip=.::::<host_name>::dhcp", "lorax '--product=Red Hat Enterprise Linux' --version=8.3 --release=8.3 --source=<URL_to_BaseOS_repository> --source=<URL_to_AppStream_repository> --nomacboot --buildarch=x86_64 '--volid=RHEL 8.3' <output_directory>", "Bad or missing usercopy whitelist? Kernel memory exposure attempt detected from SLUB object 'dma-kmalloc-192' (offset 0, size 144)! WARNING: CPU: 0 PID: 8519 at mm/usercopy.c:83 usercopy_warn+0xac/0xd8", "systemctl stop rngd systemctl disable rngd", "echo 6000 > /proc/sys/net/core/netdev_max_backlog", "systemctl restart kdump.service", "tuned-adm profile throughput-performance optimize-serial-console", "[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]", "03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us", "-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0", "-mca pml_ucx_priority 5", "The guest operating system reported that it failed with the following error code: 0x1E", "dracut_args --omit-drivers \"radeon\" force_rebuild 1", "echo 'blacklist nouveau' >> /etc/modprobe.d/blacklist.conf", "the CPU is incompatible with host CPU: Host CPU does not provide required features: hle, rtm", "<feature policy='disable' name='hle'/> <feature policy='disable' name='rtm'/> <feature policy='disable' name='taa-no'/>", "qemu-kvm: Requested safe indirect branch capability level not supported by kvm", "<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <qemu:commandline> <qemu:arg value='-machine'/> <qemu:arg value='cap-ibs=workaround'/> </qemu:commandline>", "hv_irq_unmask() failed: 0x5", "echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers", "device (eth0): linklocal6: DAD failed for an EUI-64 address" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.3_release_notes/RHEL-8-3-0-release
21.3.11. Additional Resources
21.3.11. Additional Resources To learn more about printing on Red Hat Enterprise Linux, see the following resources. 21.3.11.1. Installed Documentation man lp The manual page for the lp command that allows you to print files from the command line. man lpr The manual page for the lpr command that allows you to print files from the command line. man cancel The manual page for the command-line utility to remove print jobs from the print queue. man mpage The manual page for the command-line utility to print multiple pages on one sheet of paper. man cupsd The manual page for the CUPS printer daemon. man cupsd.conf The manual page for the CUPS printer daemon configuration file. man classes.conf The manual page for the class configuration file for CUPS. man lpstat The manual page for the lpstat command, which displays status information about classes, jobs, and printers. 21.3.11.2. Useful Websites http://www.linuxprinting.org/ GNU/Linux Printing contains a large amount of information about printing in Linux. http://www.cups.org/ Documentation, FAQs, and newsgroups about CUPS.
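As a quick illustration of the command-line utilities listed above, the following sketch assumes a configured print queue; the queue name office-printer and the job ID are placeholders:
# Print a file from the command line to a named queue
lp -d office-printer report.txt
# Show printer status and queued jobs
lpstat -p -o
# Cancel a queued job by the ID that lpstat reports
cancel office-printer-42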
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-printing-additional-resources
9.16. Write Changes to Disk
9.16. Write Changes to Disk The installer prompts you to confirm the partitioning options that you selected. Click Write changes to disk to allow the installer to partition your hard drive and install Red Hat Enterprise Linux. Figure 9.47. Writing storage configuration to disk If you are certain that you want to proceed, click Write changes to disk . Warning Up to this point in the installation process, the installer has made no lasting changes to your computer. When you click Write changes to disk , the installer will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, click Go back . To cancel installation completely, switch off your computer. To switch off most computers at this stage, press the power button and hold it down for a few seconds. After you click Write changes to disk , allow the installation process to complete. If the process is interrupted (for example, by you switching off or resetting the computer, or by a power outage) you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/write_changes_to_disk-x86
Chapter 20. Configuring Identity
Chapter 20. Configuring Identity Director includes parameters to help configure Identity Service (keystone) settings: 20.1. Region name By default, your overcloud region is named regionOne . You can change this by adding a KeystoneRegion entry to your environment file. You cannot modify this value after you deploy the overcloud.
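A minimal sketch of applying the parameter during deployment is shown below; the environment file name and path are assumptions, and the parameter itself matches the example in the commands section that follows:
# keystone_region.yaml is a hypothetical environment file containing the KeystoneRegion parameter
openstack overcloud deploy --templates -e /home/stack/templates/keystone_region.yaml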
[ "parameter_defaults: KeystoneRegion: 'SampleRegion'" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_configuring-identity
Chapter 2. Differences from upstream OpenJDK 11
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 11 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies .
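As a sketch of how to inspect the host state that Red Hat build of OpenJDK picks up on RHEL, the following commands can be run on the host; they are not specific to OpenJDK and are shown only as an illustration:
# Check whether the host is running in FIPS mode
fips-mode-setup --check
# Show the active system-wide cryptographic policy that the JDK inherits
update-crypto-policies --show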
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.16/rn-openjdk-diff-from-upstream
A.3. Install and Configure keepalived
A.3. Install and Configure keepalived Perform the following procedure on your two HAProxy nodes: Install keepalived. Configure keepalived. In the following configuration, there is a script to check the HAProxy processes. The instance uses eth0 as the network interface and configures haproxy as the master server and haproxy2 as the backup server. It also assigns a virtual IP address of 192.168.0.100. Enable and start keepalived.
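After completing the procedure, you can verify the failover setup described above with a quick check on the master node; this is only a sketch and assumes the eth0 interface and the 192.168.0.100 virtual IP address from the example configuration:
# Confirm that keepalived is active
systemctl status keepalived
# Confirm that the MASTER node currently holds the virtual IP address
ip addr show eth0 | grep 192.168.0.100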
[ "yum install -y keepalived", "vim /etc/keepalived/keepalived.conf", "vrrp_script chk_haproxy { script \"killall -0 haproxy\" # check the haproxy process interval 2 # every 2 seconds weight 2 # add 2 points if OK } vrrp_instance VI_1 { interface eth0 # interface to monitor state MASTER # MASTER on haproxy, BACKUP on haproxy2 virtual_router_id 51 priority 101 # 101 on haproxy, 100 on haproxy2 virtual_ipaddress { 192.168.0.100 # virtual ip address } track_script { chk_haproxy } }", "systemctl enable keepalived systemctl start keepalived" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/keepalived_install_example1
Chapter 31. Working with Kernel Modules
Chapter 31. Working with Kernel Modules The Linux kernel is modular, which means it can extend its capabilities through the use of dynamically-loaded kernel modules . A kernel module can provide: a device driver which adds support for new hardware; or, support for a file system such as btrfs or NFS . Like the kernel itself, modules can take parameters that customize their behavior, though the default parameters work well in most cases. User-space tools can list the modules currently loaded into a running kernel; query all available modules for available parameters and module-specific information; and load or unload (remove) modules dynamically into or from a running kernel. Many of these utilities, which are provided by the module-init-tools package, take module dependencies into account when performing operations so that manual dependency-tracking is rarely necessary. On modern systems, kernel modules are automatically loaded by various mechanisms when the conditions call for it. However, there are occasions when it is necessary to load and/or unload modules manually, such as when a module provides optional functionality, one module should be preferred over another although either could provide basic functionality, or when a module is misbehaving, among other situations. This chapter explains how to: use the user-space module-init-tools package to display, query, load and unload kernel modules and their dependencies; set module parameters both dynamically on the command line and permanently so that you can customize the behavior of your kernel modules; and, load modules at boot time. Note In order to use the kernel module utilities described in this chapter, first ensure the module-init-tools package is installed on your system by running, as root: For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . 31.1. Listing Currently-Loaded Modules You can list all kernel modules that are currently loaded into the kernel by running the lsmod command: Each row of lsmod output specifies: the name of a kernel module currently loaded in memory; the amount of memory it uses; and, the sum total of processes that are using the module and other modules which depend on it, followed by a list of the names of those modules, if there are any. Using this list, you can first unload all the modules depending on the module you want to unload. For more information, see Section 31.4, "Unloading a Module" . Finally, note that lsmod output is less verbose and considerably easier to read than the content of the /proc/modules pseudo-file.
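To check a single module rather than reading the full listing, you can filter the output; this is a sketch, and the module name kvm is used only as an example:
# Check whether a specific module is currently loaded
lsmod | grep kvm
# The same information is available in the pseudo-file mentioned above
grep kvm /proc/modules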
[ "~]# yum install module-init-tools", "~]USD lsmod Module Size Used by xfs 803635 1 exportfs 3424 1 xfs vfat 8216 1 fat 43410 1 vfat tun 13014 2 fuse 54749 2 ip6table_filter 2743 0 ip6_tables 16558 1 ip6table_filter ebtable_nat 1895 0 ebtables 15186 1 ebtable_nat ipt_MASQUERADE 2208 6 iptable_nat 5420 1 nf_nat 19059 2 ipt_MASQUERADE,iptable_nat rfcomm 65122 4 ipv6 267017 33 sco 16204 2 bridge 45753 0 stp 1887 1 bridge llc 4557 2 bridge,stp bnep 15121 2 l2cap 45185 16 rfcomm,bnep cpufreq_ondemand 8420 2 acpi_cpufreq 7493 1 freq_table 3851 2 cpufreq_ondemand,acpi_cpufreq usb_storage 44536 1 sha256_generic 10023 2 aes_x86_64 7654 5 aes_generic 27012 1 aes_x86_64 cbc 2793 1 dm_crypt 10930 1 kvm_intel 40311 0 kvm 253162 1 kvm_intel [output truncated]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-working_with_kernel_modules
probe::ipmib.ReasmReqds
probe::ipmib.ReasmReqds Name probe::ipmib.ReasmReqds - Count the number of packet fragment reassembly requests Synopsis ipmib.ReasmReqds Values op value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global ReasmReqds counter (equivalent to SNMP's MIB IPSTATS_MIB_REASMREQDS)
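A minimal SystemTap sketch that uses this probe is shown below; it assumes the systemtap package is installed and is run as root, and the global variable name and 10-second interval are arbitrary:
# Count reassembly requests for 10 seconds, then print the total
stap -e 'global reqs; probe ipmib.ReasmReqds { reqs += op } probe timer.s(10) { printf("ReasmReqds: %d\n", reqs); exit() }'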
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-reasmreqds
Chapter 4. Managing IdM service secrets: storing and retrieving secrets
Chapter 4. Managing IdM service secrets: storing and retrieving secrets This section shows how an administrator can use a service vault in Identity Management (IdM) to securely store a service secret in a centralized location. The vault used in the example is asymmetric, which means that to use it, the administrator needs to perform the following steps: Generate a private key using, for example, the openssl utility. Generate a public key based on the private key. The service secret is encrypted with the public key when an administrator archives it into the vault. Afterwards, a service instance hosted on a specific machine in the domain retrieves the secret using the private key. Only the service and the administrator are allowed to access the secret. If the secret is compromised, the administrator can replace it in the service vault and then redistribute it to those individual service instances that have not been compromised. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . This section includes these procedures: Storing an IdM service secret in an asymmetric vault Retrieving a service secret for an IdM service instance Changing an IdM service vault secret when compromised Terminology used In the procedures: admin is the administrator who manages the service password. private-key-to-an-externally-signed-certificate.pem is the file containing the service secret, in this case a private key to an externally signed certificate. Do not confuse this private key with the private key used to retrieve the secret from the vault. secret_vault is the vault created for the service. HTTP/webserver.idm.example.com is the service whose secret is being archived. service-public.pem is the service public key used to encrypt the password stored in secret_vault . service-private.pem is the service private key used to decrypt the password stored in secret_vault . 4.1. Storing an IdM service secret in an asymmetric vault Follow this procedure to create an asymmetric vault and use it to archive a service secret. Prerequisites You know the IdM administrator password. Procedure Log in as the administrator: Obtain the public key of the service instance. For example, using the openssl utility: Generate the service-private.pem private key. Generate the service-public.pem public key based on the private key. Create an asymmetric vault as the service instance vault, and provide the public key: The password archived into the vault will be protected with the key. Archive the service secret into the service vault: This encrypts the secret with the service instance public key. Repeat these steps for every service instance that requires the secret. Create a new asymmetric vault for each service instance. 4.2. Retrieving a service secret for an IdM service instance Follow this procedure to use a service instance to retrieve the service vault secret using a locally-stored service private key. Prerequisites You have access to the keytab of the service principal owning the service vault, for example HTTP/webserver.idm.example.com. You have created an asymmetric vault and archived a secret in the vault . You have access to the private key used to retrieve the service vault secret. Procedure Log in as the administrator: Obtain a Kerberos ticket for the service: Retrieve the service vault password: 4.3.
Changing an IdM service vault secret when compromised Follow this procedure to isolate a compromised service instance by changing the service vault secret. Prerequisites You know the IdM administrator password. You have created an asymmetric vault to store the service secret. You have generated the new secret and have access to it, for example in the new-private-key-to-an-externally-signed-certificate.pem file. Procedure Archive the new secret into the service instance vault: This overwrites the current secret stored in the vault. Retrieve the new secret on non-compromised service instances only. For details, see Retrieving a service secret for an IdM service instance . 4.4. Additional resources See Using Ansible to manage IdM service vaults: storing and retrieving secrets .
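As a quick sanity check after rotating the secret, you can display the vault entry; this is only a sketch using the vault and service names from this chapter's examples:
# Confirm the vault type and the owning service after archiving the new secret
ipa vault-show secret_vault --service HTTP/webserver.idm.example.com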
[ "kinit admin", "openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)", "openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key", "ipa vault-add secret_vault --service HTTP/webserver.idm.example.com --type asymmetric --public-key-file service-public.pem ---------------------------- Added vault \"secret_vault\" ---------------------------- Vault name: secret_vault Type: asymmetric Public key: LS0tLS1C...S0tLS0tCg== Owner users: admin Vault service: HTTP/[email protected]", "ipa vault-archive secret_vault --service HTTP/webserver.idm.example.com --in private-key-to-an-externally-signed-certificate.pem ----------------------------------- Archived data into vault \"secret_vault\" -----------------------------------", "kinit admin", "kinit HTTP/webserver.idm.example.com -k -t /etc/httpd/conf/ipa.keytab", "ipa vault-retrieve secret_vault --service HTTP/webserver.idm.example.com --private-key-file service-private.pem --out secret.txt ------------------------------------ Retrieved data from vault \"secret_vault\" ------------------------------------", "ipa vault-archive secret_vault --service HTTP/webserver.idm.example.com --in new-private-key-to-an-externally-signed-certificate.pem ----------------------------------- Archived data into vault \"secret_vault\" -----------------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_vaults_in_identity_management/managing-idm-service-vaults-storing-and-retrieving-secrets_working-with-vaults-in-identity-management
OAuth APIs
OAuth APIs OpenShift Container Platform 4.13 Reference guide for OAuth APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/oauth_apis/index
Chapter 3. Networking dashboards
Chapter 3. Networking dashboards Networking metrics are viewable in dashboards within the OpenShift Container Platform web console, under Observe Dashboards . 3.1. Network Observability Operator If you have the Network Observability Operator installed, you can view network traffic metrics dashboards by selecting the Netobserv dashboard from the Dashboards drop-down list. For more information about metrics available in this Dashboard , see Network Observability metrics dashboards . 3.2. Networking and OVN-Kubernetes dashboard You can view both general networking metrics as well as OVN-Kubernetes metrics from the dashboard. To view general networking metrics, select Networking/Linux Subsystem Stats from the Dashboards drop-down list. You can view the following networking metrics from the dashboard: Network Utilisation , Network Saturation , and Network Errors . To view OVN-Kubernetes metrics, select Networking/Infrastructure from the Dashboards drop-down list. You can view the following OVN-Kubernetes metrics: Networking Configuration , TCP Latency Probes , Control Plane Resources , and Worker Resources . 3.3. Ingress Operator dashboard You can view networking metrics handled by the Ingress Operator from the dashboard. This includes metrics like the following: Incoming and outgoing bandwidth HTTP error rates HTTP server response latency To view these Ingress metrics, select Networking/Ingress from the Dashboards drop-down list. You can view Ingress metrics for the following categories: Top 10 Per Route , Top 10 Per Namespace , and Top 10 Per Shard .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/networking-dashboards_accessing-hosts
Appendix A. RHEL 8 repositories
Appendix A. RHEL 8 repositories Before the upgrade, ensure you have appropriate repositories enabled as described in step 4 of the procedure in Preparing a RHEL 8 system for the upgrade . If you plan to use Red Hat Subscription Manager during the upgrade, you must enable the following repositories before the upgrade by using the subscription-manager repos --enable repository_id command: Table A.1. RHEL 8 repositories Architecture Repository Repository ID 64-bit Intel and AMD Base rhel-8-for-x86_64-baseos-rpms AppStream rhel-8-for-x86_64-appstream-rpms 64-bit ARM Base rhel-8-for-aarch64-baseos-rpms AppStream rhel-8-for-aarch64-appstream-rpms IBM POWER (little endian) Base rhel-8-for-ppc64le-baseos-rpms AppStream rhel-8-for-ppc64le-appstream-rpms IBM Z Base rhel-8-for-s390x-baseos-rpms AppStream rhel-8-for-s390x-appstream-rpms You can enable the following repositories before the upgrade by using the subscription-manager repos --enable repository_id command: Table A.2. Voluntary RHEL 8 repositories Architecture Repository Repository ID 64-bit Intel and AMD Code Ready Linux Builder codeready-builder-for-rhel-8-x86_64-rpms Supplementary rhel-8-for-x86_64-supplementary-rpms 64-bit ARM Code Ready Linux Builder codeready-builder-for-rhel-8-aarch64-rpms Supplementary rhel-8-for-aarch64-supplementary-rpms IBM POWER (little endian) Code Ready Linux Builder codeready-builder-for-rhel-8-ppc64le-rpms Supplementary rhel-8-for-ppc64le-supplementary-rpms IBM Z Code Ready Linux Builder codeready-builder-for-rhel-8-s390x-rpms Supplementary rhel-8-for-s390x-supplementary-rpms Note If you have enabled a RHEL 8 Code Ready Linux Builder or a RHEL 8 Supplementary repository before an in-place upgrade, Leapp enables the RHEL 9 CodeReady Linux Builder or the RHEL 9 Supplementary repositories, respectively. For more information, see the Package manifest . If you decide to use custom repositories, enable them per instructions in Configuring custom repositories .
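As a sketch for the 64-bit Intel and AMD architecture, the mandatory repositories can be enabled and verified as follows; substitute the repository IDs from the tables above for other architectures:
subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms --enable rhel-8-for-x86_64-appstream-rpms
# Verify which repositories are currently enabled
subscription-manager repos --list-enabled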
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/appendix-rhel-8-repositories_upgrading-from-rhel-8-to-rhel-9
Chapter 30. Managing log data
Chapter 30. Managing log data Red Hat Process Automation Manager manages the required maintenance runtime data. It removes some data automatically, including the following data types: Process instance data , which is removed upon process instance completion. Work item data , which is removed upon work item completion. Task instance data , which is removed upon completion of a process to which the given task belongs. Runtime data that is not cleaned automatically includes session information data that is based on the selected runtime strategy. Singleton strategy ensures that runtime data of session information is not automatically removed. Per request strategy allows automatic removal when a request is terminated. Per process instances automatically removes process instance data when a process instance is mapped to a session that is completed or aborted. Red Hat Process Automation Manager also provides audit log data tables. You can use these tables to keep track of current and past process instances. By default, Red Hat Process Automation Manager does not remove any data from audit log tables. There are three ways to manage and maintain the audit data tables: You can set up an automatic cleanup of these tables using Business Central, as described in Section 30.1, "Setting up automatic cleanup job" . You can remove information from the tables manually using the Java API, as described in Section 30.2, "Manual cleanup" . You can run a custom query on the Red Hat Process Automation Manager database, including the audit log tables, as described in Section 30.4, "Running a custom query on the Red Hat Process Automation Manager database" . 30.1. Setting up automatic cleanup job You can set up an automatic cleanup job in Business Central. Procedure In Business Central, go to Manage > Jobs . Click New Job . Enter values for Business Key , Due On , and Retries fields. Enter the following command into the Type field. To configure parameters, complete the following steps: Click the Advanced tab. Click Add Parameter . In the Key column, enter a parameter. In the Value column, enter a parameter. For the list of parameters for the command, see Section 30.3, "Removing logs from the database" . Click Create . Business Central creates the automatic cleanup job. 30.2. Manual cleanup To perform manual cleanup, you can use the audit Java API. The audit API consists of the following areas: Table 30.1. Audit API areas Name Description Process audit It is used to clean up process, node and variable logs that are accessible in the jbpm-audit module. For example, you can access the module as follows: org.jbpm.process.audit.JPAAuditLogService Task audit It is used to clean up tasks and events that are accessible in the jbpm-human-task-audit module. For example, you can access the module as follows: org.jbpm.services.task.audit.service.TaskJPAAuditService Executor jobs It is used to clean up executor jobs and errors that are accessible in the jbpm-executor module. For example, you can access the module as follows: org.jbpm.executor.impl.jpa.ExecutorJPAAuditService 30.3. Removing logs from the database Use LogCleanupCommand executor command to clean up the data, which is using the database space. The LogCleanupCommand consists of logic to automatically clean up all or selected data. There are several configuration options that you can use with the LogCleanupCommand : Table 30.2. 
LogCleanupCommand parameters table Name Description Is Exclusive SkipProcessLog Indicates whether process and node instances, and process variables log cleanup is skipped when the command runs. The default value is false . No, it is used with other parameters. SkipTaskLog Indicates if the task audit and event log cleanup are skipped. The default value is false . No, it is used with other parameters. SkipExecutorLog Indicates if Red Hat Process Automation Manager executor entries cleanup is skipped. The default value is false . No, it is used with other parameters. SingleRun Indicates if a job routine runs only once. The default value is false . No, it is used with other parameters. NextRun Schedules the job execution. The default value is 24h . For example, set to 12h for jobs to be executed every 12 hours. The schedule is ignored if you set SingleRun to true , unless you set both SingleRun and NextRun . If both are set, the NextRun schedule takes priority. The ISO format can be used to set the precise date. No, it is used with other parameters. OlderThan Logs that are older than the specified date are removed. The date format is YYYY-MM-DD . Usually, this parameter is used for single run jobs. Yes, it is not used with OlderThanPeriod parameter. OlderThanPeriod Logs that are older than the specified timer expression are removed. For example, set 30d to remove logs, which are older than 30 days. Yes, it is not used with OlderThan parameter. ForProcess Specifies process definition ID for logs that are removed. No, it is used with other parameters. RecordsPerTransaction Indicates the number of records in a transaction that is removed. The default value is 0 , indicating all the records. No, it is used with other parameters. ForDeployment Specifies deployment ID of the logs that are removed. No, it is used with other parameters. EmfName Persistence unit name that is used to perform delete operation. Not applicable Note LogCleanupCommand does not remove any active instances, such as running process instances, task instances, or executor jobs. 30.4. Running a custom query on the Red Hat Process Automation Manager database You can use the ExecuteSQLQueryCommand executor command to run a custom query on the Red Hat Process Automation Manager database, including the audit log data tables. You can set up a job that runs this command in Business Central. Procedure In Business Central, select Manage > Jobs . Click New Job . Enter values for Business Key , Due On , and Retries fields. Enter the following command into the Type field. To configure parameters, complete the following steps: Open the Advanced tab. Click Add Parameter . In the Key column, enter a parameter value. In the Value column, enter a parameter value. For the list of parameters for the command, see Section 30.4.1, "Parameters for the ExecuteSQLQueryCommand command" . Click Create . Business Central creates the custom query job. Optional: If you want to retrieve the results of the query, complete the following steps: In the list of jobs that Business Central displays, find the job that you started. If the job is not present in the list, remove any filters from the Active filters list. Record the id value for the job. Using a web browser, access the Swagger documentation on your KIE Server at <kie_server_address>/docs , for example, http://localhost:8080/kie-server/docs/ . Click the GET /server/jobs/{jobId} request. In the jobId field, enter the id value that you recorded. From the withErrors list, select true . 
From the withData list, select true . Click Execute . Review the Server response field. If the SQL query succeeded, the result is under the "response-data" key. 30.4.1. Parameters for the ExecuteSQLQueryCommand command The ExecuteSQLQueryCommand executor command runs a custom query on the Red Hat Process Automation Manager database, including the audit log tables. For the schema for the audit log tables, see Process engine in Red Hat Process Automation Manager . You can configure the following parameters for the ExecuteSQLQueryCommand command. Table 30.3. ExecuteSQLQueryCommand parameters table Name Description SingleRun true if the query can be triggered once. false if the query can be triggered multiple times. EmfName name of the persistence unit to be used to run the query businessKey The business key to use with the query. If configuring the command in Business Central, use the business key that you set for the job SQL The native SQL query to execute. Preface parameters with the : character parametersList List of all parameters in the SQL query. Separate the parameters with the , character SQL parameter name The value for the SQL parameter. Create a separate command parameter for every SQL parameter For example, you might use a query with two parameters: SELECT * FROM RequestInfo WHERE id = :paramId AND businessKey = :paramKey Set the following parameters for the ExecuteSQLQueryCommand command: SQL : SELECT * FROM RequestInfo WHERE id = :paramId AND businessKey = :paramKey ; parametersList : paramId,paramKey paramId : The value for id paramKey : The value for businessKey
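The same request can be issued with curl instead of the Swagger UI; the following is only a sketch, and the context path, credentials, and job ID 42 are placeholders:
# Retrieve the job, including errors and response data
curl -u 'kieserver:password' 'http://localhost:8080/kie-server/services/rest/server/jobs/42?withErrors=true&withData=true' -H 'accept: application/json'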
[ "org.jbpm.executor.commands.LogCleanupCommand", "org.jbpm.executor.commands.ExecuteSQLQueryCommand", "SELECT * FROM RequestInfo WHERE id = :paramId AND businessKey = :paramKey" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/manage-log-file-proc
Chapter 26. Headless Systems
Chapter 26. Headless Systems When installing headless systems, you can only choose between an automated Kickstart installation and an interactive VNC installation using Connect Mode. For more information about automated Kickstart installation, see Section 27.3.1, "Kickstart Commands and Options" . The general process for interactive VNC installation is described below. Set up a network boot server to start the installation. Information about installing and performing basic configuration of a network boot server can be found in Chapter 24, Preparing for a Network Installation . Configure the server to use the boot options for a Connect Mode VNC installation. For information on these boot options, see Section 25.2.2, "Installing in VNC Connect Mode" . Follow the procedure for VNC Installation using Connect Mode as described in Procedure 25.2, "Starting VNC in Connect Mode" . However, when directed to boot the system, boot it from the network server instead of local media.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-headless-installations
5.4. Manually Configuring a Linux Client
5.4. Manually Configuring a Linux Client The ipa-client-install command automatically configures services like Kerberos, SSSD, PAM, and NSS. However, if the ipa-client-install command cannot be used on a system for some reason, then the IdM client entries and the services can be configured manually. 5.4.1. Setting up an IdM Client (Full Procedure) Install SSSD, if it is not already installed. Optional. Install the IdM tools so that administrative tasks can be performed from the host. [root@client ~]# yum install ipa-admintools On an IdM server. Create a host entry for the client. Creating hosts manually is covered in Section 5.4.2, "Other Examples of Adding a Host Entry" . On an IdM server. Create keytabs for the client. Log in as IdM administrator. Set the client host to be managed by the server. Generate the keytab for the client. Copy the keytab to the client machine and rename it /etc/krb5.keytab . Note If there is an existing /etc/krb5.keytab that should be preserved, the two files can be combined using ktutil . Set the correct user permissions for the /etc/krb5.keytab file. Set the SELinux contexts for the /etc/krb5.keytab file. Configure SSSD by editing the /etc/sssd/sssd.conf file to point to the IdM domain. Configure NSS to use SSSD for passwords, groups, users, and netgroups. Configure the /etc/krb5.conf file to point to the IdM KDC. Update the /etc/pam.d configuration to use the pam_sss.so modules. For /etc/pam.d/fingerprint-auth : For /etc/pam.d/system-auth : For /etc/pam.d/password-auth : For /etc/pam.d/smartcard-auth : Install the IdM server's CA certificate. Obtain the certificate from the server. Install the certificate in the system's NSS database. Set up a host certificate for the host in IdM. Make sure certmonger is running. Note Configure chkconfig so that the certmonger service starts by default. Use the ipa-getcert command, which creates and manages the certificate through certmonger . The options are described more in Section B.1, "Requesting a Certificate with certmonger" . If administrative tools were not installed on the client, then the certificate can be generated on an IdM server, copied over to the host, and installed using certutil . Set up NFS to work with Kerberos. Note To help troubleshoot potential NFS setup errors, enable debug information in the /etc/sysconfig/nfs file. On an IdM server, add an NFS service principal for the NFS client. [root@ipaclient ~]# ipa service-add nfs/ipaclient.example.com@EXAMPLE Note This must be run from a machine with the ipa-admintools package installed so that the ipa command is available. On the IdM server, obtain a keytab for the NFS service principal. [root@ipaclient ~]# ipa-getkeytab -s server.example.com -p nfs/ipaclient.example.com@EXAMPLE -k /tmp/krb5.keytab Note Some versions of the Linux NFS implementation have limited encryption type support. If the NFS server is hosted on a version older than Red Hat Enterprise Linux 6, use the -e des-cbc-crc option to the ipa-getkeytab command for any nfs/<FQDN> service keytabs to set up, both on the server and on all clients. This instructs the KDC to generate only DES keys. When using DES keys, all clients and servers that rely on this encryption type need to have the allow_weak_crypto option enabled in the [libdefaults] section of the /etc/krb5.conf file. Without these configuration changes, NFS clients and servers are unable to authenticate to each other, and attempts to mount NFS filesystems may fail.
The client's rpc.gssd and the server's rpc.svcgssd daemons may log errors indicating that DES encryption types are not permitted. Copy the keytab from the IdM server to the NFS server. For example, if the IdM and NFS servers are on different machines: [root@ipaclient ~]# scp /tmp/krb5.keytab [email protected]:/etc/krb5.keytab Copy the keytab from the IdM server to the IdM client. For example: [root@ipaclient ~]# scp /tmp/krb5.keytab [email protected]:/etc/krb5.keytab Configure the /etc/exports file on the NFS server. /ipashare gss/krb5p(rw,no_root_squash,subtree_check,fsid=0) On the client, mount the NFS share. Always specify the share as nfs_server:/ /mountpoint . Use the same -o sec setting as is used in the /etc/exports file for the NFS server. [root@client ~]# mount -v -t nfs4 -o sec=krb5p nfs.example.com:/ /mnt/ipashare 5.4.2. Other Examples of Adding a Host Entry Section 5.4.1, "Setting up an IdM Client (Full Procedure)" covers the full procedure for configuring an IdM client manually. One of those steps is creating a host entry, and there are several different ways and options to perform that. 5.4.2.1. Adding Host Entries from the Web UI Open the Identity tab, and select the Hosts subtab. Click the Add link at the top of the hosts list. Fill in the machine name and select the domain from the configured zones in the drop-down list. If the host has already been assigned a static IP address, then include that with the host entry so that the DNS entry is fully created. DNS zones can be created in IdM, which is described in Section 17.6.1, "Adding Forward DNS Zones" . If the IdM server does not manage the DNS server, the zone can be entered manually in the menu area, like a regular text field. Note Select the Force checkbox to add the host DNS record, even if the hostname cannot be resolved. This is useful for hosts which use DHCP and do not have a static IP address. This essentially creates a placeholder entry in the IdM DNS service. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Click the Add and Edit button to go directly to the expanded entry page and fill in more attribute information. Information about the host hardware and physical location can be included with the host entry. 5.4.2.2. Adding Host Entries from the Command Line Host entries are created using the host-add command. This command adds the host entry to the IdM Directory Server. The full list of options for host-add is listed in the ipa host manpage. At its most basic, an add operation only requires the client hostname to add the client to the Kerberos realm and to create an entry in the IdM LDAP server: If the IdM server is configured to manage DNS, then the host can also be added to the DNS resource records using the --ip-address and --force options. Example 5.6. Creating Host Entries with Static IP Addresses Commonly, hosts may not have a static IP address or the IP address may not be known at the time the client is configured. For example, laptops may be preconfigured as Identity Management clients, but they do not have IP addresses at the time they're configured. Hosts which use DHCP can still be configured with a DNS entry by using --force . This essentially creates a placeholder entry in the IdM DNS service. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Example 5.7. Creating Host Entries with DHCP Host records are deleted using the host-del command.
If the IdM domain uses DNS, then the --updatedns option also removes the associated records of any kind for the host from the DNS.
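As a quick verification after adding a host entry, you can display it; this is a sketch that reuses the client1.example.com hostname from the examples above:
kinit admin
ipa host-show client1.example.com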
[ "yum install ipa-admintools", "[jsmith@client ~]USD kinit admin [jsmith@client ~]USD ipa host-add --force --ip-address=192.168.166.31 ipaclient.example.com", "[jsmith@client ~]USD kinit admin", "[jsmith@client ~]USD ipa host-add-managedby --hosts=server.example.com ipaclient.example.com", "[jsmith@client ~]USD ipa-getkeytab -s server.example.com -p host/ipaclient.example.com -k /tmp/ipaclient.keytab", "chown root:root /etc/krb5.keytab chmod 0600 /etc/krb5.keytab", "chcon system_u:object_r:krb5_keytab_t:s0 /etc/krb5.keytab", "touch /etc/sssd/sssd.conf vim /etc/sssd/sssd.conf [sssd] config_file_version = 2 services = nss, pam domains = example.com [nss] [pam] [domain/example.com] cache_credentials = True krb5_store_password_if_offline = True ipa_domain = example.com id_provider = ipa auth_provider = ipa access_provider = ipa ipa_hostname = ipaclient.example.com chpass_provider = ipa ipa_server = server.example.com ldap_tls_cacert = /etc/ipa/ca.crt", "vim /etc/nsswitch.conf passwd: files sss shadow: files sss group: files sss netgroup: files sss", "[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = EXAMPLE.COM dns_lookup_realm = false dns_lookup_kdc = false rdns = false ticket_lifetime = 24h forwardable = yes allow_weak_crypto = true [realms] EXAMPLE.COM = { kdc = server.example.com:88 admin_server = server.example.com:749 default_domain = example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM", "account [default=bad success=ok user_unknown=ignore] pam_sss.so session optional pam_sss.so", "auth sufficient pam_sss.so use_first_pass account [default=bad success=ok user_unknown=ignore] pam_sss.so password sufficient pam_sss.so use_authtok session optional pam_sss.so", "auth sufficient pam_sss.so use_first_pass account [default=bad success=ok user_unknown=ignore] pam_sss.so password sufficient pam_sss.so use_authtok session optional pam_sss.so", "account [default=bad success=ok user_unknown=ignore] pam_sss.so session optional pam_sss.so", "wget -O /etc/ipa/ca.crt http://ipa.example.com/ipa/config/ca.crt", "certutil -A -d /etc/pki/nssdb -n \"IPA CA\" -t CT,C,C -a -i /etc/ipa/ca.crt", "service certmonger start", "chkconfig certmonger on", "ipa-getcert request -d /etc/pki/nssdb -n Server-Cert -K HOST/ipaclient.example.com -N 'CN=ipaclient.example.com,O=EXAMPLE.COM'", "RPCGSSDARGS=\"-vvv\" RPCSVCGSSDARGS=\"-vvv\"", "ipa service-add nfs/ipaclient.example.com@EXAMPLE", "ipa-getkeytab -s server.example.com -p nfs/ipaclient.example.com@EXAMPLE -k /tmp/krb5.keytab", "scp /tmp/krb5.keytab [email protected]:/etc/krb5.keytab", "scp /tmp/krb5.keytab [email protected]:/etc/krb5.keytab", "/ipashare gss/krb5p(rw,no_root_squash,subtree_check,fsid=0)", "mount -v -t nfs4 -o sec=krb5p nfs.example.com:/ /mnt/ipashare", "ipa host-add client1.example.com", "ipa host-add --force --ip-address=192.168.166.31 client1.example.com", "ipa host-add --force client1.example.com", "ipa host-del --updatedns client1.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/linux-manual
Chapter 2. Deploying OpenShift sandboxed containers on bare metal
Chapter 2. Deploying OpenShift sandboxed containers on bare metal You can deploy OpenShift sandboxed containers on an on-premise bare-metal cluster with Red Hat Enterprise Linux CoreOS (RHCOS) installed on the worker nodes. Note RHEL nodes are not supported. Nested virtualization is not supported. You can use any installation method including user-provisioned , installer-provisioned , or Assisted Installer to deploy your cluster. You can also install OpenShift sandboxed containers on Amazon Web Services (AWS) bare-metal instances. Bare-metal instances offered by other cloud providers are not supported. Cluster requirements You have installed Red Hat OpenShift Container Platform 4.14 or later on the cluster where you are installing the OpenShift sandboxed containers Operator. Your cluster has at least one worker node. 2.1. OpenShift sandboxed containers resource requirements You must ensure that your cluster has sufficient resources. OpenShift sandboxed containers lets users run workloads on their OpenShift Container Platform clusters inside a sandboxed runtime (Kata). Each pod is represented by a virtual machine (VM). Each VM runs in a QEMU process and hosts a kata-agent process that acts as a supervisor for managing container workloads, and the processes running in those containers. Two additional processes add more overhead: containerd-shim-kata-v2 is used to communicate with the pod. virtiofsd handles host file system access on behalf of the guest. Each VM is configured with a default amount of memory. Additional memory is hot-plugged into the VM for containers that explicitly request memory. A container running without a memory resource consumes free memory until the total memory used by the VM reaches the default allocation. The guest and its I/O buffers also consume memory. If a container is given a specific amount of memory, then that memory is hot-plugged into the VM before the container starts. When a memory limit is specified, the workload is terminated if it consumes more memory than the limit. If no memory limit is specified, the kernel running on the VM might run out of memory. If the kernel runs out of memory, it might terminate other processes on the VM. Default memory sizes The following table lists some the default values for resource allocation. Resource Value Memory allocated by default to a virtual machine 2Gi Guest Linux kernel memory usage at boot ~110Mi Memory used by the QEMU process (excluding VM memory) ~30Mi Memory used by the virtiofsd process (excluding VM I/O buffers) ~10Mi Memory used by the containerd-shim-kata-v2 process ~20Mi File buffer cache data after running dnf install on Fedora ~300Mi* [1] File buffers appear and are accounted for in multiple locations: In the guest where it appears as file buffer cache. In the virtiofsd daemon that maps allowed user-space file I/O operations. In the QEMU process as guest memory. Note Total memory usage is properly accounted for by the memory utilization metrics, which only count that memory once. Pod overhead describes the amount of system resources that a pod on a node uses. You can get the current pod overhead for the Kata runtime by using oc describe runtimeclass kata as shown below. Example USD oc describe runtimeclass kata Example output kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: "500Mi" cpu: "500m" You can change the pod overhead by changing the spec.overhead field for a RuntimeClass . 
For example, if the configuration that you run for your containers consumes more than 350Mi of memory for the QEMU process and guest kernel data, you can alter the RuntimeClass overhead to suit your needs. Note The specified default overhead values are supported by Red Hat. Changing default overhead values is not supported and can result in technical issues. When performing any kind of file system I/O in the guest, file buffers are allocated in the guest kernel. The file buffers are also mapped in the QEMU process on the host, as well as in the virtiofsd process. For example, if you use 300Mi of file buffer cache in the guest, both QEMU and virtiofsd appear to use 300Mi additional memory. However, the same memory is used in all three cases. Therefore, the total memory usage is only 300Mi, mapped in three different places. This is correctly accounted for when reporting the memory utilization metrics. 2.2. Deploying OpenShift sandboxed containers by using the web console You can deploy OpenShift sandboxed containers on bare metal by using the OpenShift Container Platform web console to perform the following tasks: Install the OpenShift sandboxed containers Operator. Optional: Install the Node Feature Discovery (NFD) Operator to configure node eligibility checks. For more information, see node eligibility checks and the NFD Operator documentation . Create the KataConfig custom resource. Configure the OpenShift sandboxed containers workload objects. 2.2.1. Installing the OpenShift sandboxed containers Operator You can install the OpenShift sandboxed containers Operator by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the web console, navigate to Operators OperatorHub . In the Filter by keyword field, type OpenShift sandboxed containers . Select the OpenShift sandboxed containers Operator tile and click Install . On the Install Operator page, select stable from the list of available Update Channel options. Verify that Operator recommended Namespace is selected for Installed Namespace . This installs the Operator in the mandatory openshift-sandboxed-containers-operator namespace. If this namespace does not yet exist, it is automatically created. Note Attempting to install the OpenShift sandboxed containers Operator in a namespace other than openshift-sandboxed-containers-operator causes the installation to fail. Verify that Automatic is selected for Approval Strategy . Automatic is the default value, and enables automatic updates to OpenShift sandboxed containers when a new z-stream release is available. Click Install . Navigate to Operators Installed Operators to verify that the Operator is installed. Additional resources Using Operator Lifecycle Manager on restricted networks . Configuring proxy support in Operator Lifecycle Manager for disconnected environments. 2.2.2. Creating the KataConfig custom resource You must create the KataConfig custom resource (CR) to install kata as a RuntimeClass on your worker nodes. The kata runtime class is installed on all worker nodes by default. If you want to install kata on specific nodes, you can add labels to those nodes and then define the label in the KataConfig CR. OpenShift sandboxed containers installs kata as a secondary, optional runtime on the cluster and not as the primary runtime. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. 
The following factors might increase the reboot time: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard disk drive rather than an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU and network. Prerequisites You have access to the cluster as a user with the cluster-admin role. Optional: You have installed the Node Feature Discovery Operator if you want to enable node eligibility checks. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Select the OpenShift sandboxed containers Operator. On the KataConfig tab, click Create KataConfig . Enter the following details: Name : Optional: The default name is example-kataconfig . Labels : Optional: Enter any relevant, identifying attributes to the KataConfig resource. Each label represents a key-value pair. checkNodeEligibility : Optional: Select to use the Node Feature Discovery Operator (NFD) to detect node eligibility. kataConfigPoolSelector . Optional: To install kata on selected nodes, add a match expression for the labels on the selected nodes: Expand the kataConfigPoolSelector area. In the kataConfigPoolSelector area, expand matchExpressions . This is a list of label selector requirements. Click Add matchExpressions . In the Key field, enter the label key the selector applies to. In the Operator field, enter the key's relationship to the label values. Valid operators are In , NotIn , Exists , and DoesNotExist . Expand the Values area and then click Add value . In the Value field, enter true or false for key label value. logLevel : Define the level of log data retrieved for nodes with the kata runtime class. Click Create . The KataConfig CR is created and installs the kata runtime class on the worker nodes. Wait for the kata installation to complete and the worker nodes to reboot before verifying the installation. Verification On the KataConfig tab, click the KataConfig CR to view its details. Click the YAML tab to view the status stanza. The status stanza contains the conditions and kataNodes keys. The value of status.kataNodes is an array of nodes, each of which lists nodes in a particular state of kata installation. A message appears each time there is an update. Click Reload to refresh the YAML. When all workers in the status.kataNodes array display the values installed and conditions.InProgress: False with no specified reason, the kata is installed on the cluster. Additional resources KataConfig status messages 2.2.3. Configuring workload objects You must configure OpenShift sandboxed containers workload objects by setting kata as the runtime class for the following pod-templated objects: Pod objects ReplicaSet objects ReplicationController objects StatefulSet objects Deployment objects DeploymentConfig objects Important Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources. Prerequisites You have created the KataConfig custom resource (CR). Procedure In the OpenShift Container Platform web console, navigate to Workloads workload type, for example, Pods . On the workload type page, click an object to view its details. Click the YAML tab. Add spec.runtimeClassName: kata to the manifest of each pod-templated workload object as in the following example: apiVersion: v1 kind: <object> # ... spec: runtimeClassName: kata # ... 
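As a concrete illustration of the template above, a minimal Pod manifest might look like the following sketch. The pod name, namespace, and command are placeholders rather than values from this procedure, and the image is only an example of a small image that the cluster can typically pull:

apiVersion: v1
kind: Pod
metadata:
  name: kata-example              # placeholder name
  namespace: my-sandboxed-apps    # placeholder dedicated namespace (not an Operator namespace)
spec:
  runtimeClassName: kata          # schedules the pod with the sandboxed (Kata) runtime
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest   # example image only
    command: ["sleep", "infinity"]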
OpenShift Container Platform creates the workload object and begins scheduling it. Verification Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata , then the workload is running on OpenShift sandboxed containers, using peer pods. 2.3. Deploying OpenShift sandboxed containers by using the command line You can deploy OpenShift sandboxed containers on bare metal by using the command line interface (CLI) to perform the following tasks: Install the OpenShift sandboxed containers Operator. After installing the Operator, you can configure the following options: Configure a block storage device. Install the Node Feature Discovery (NFD) Operator to configure node eligibility checks. For more information, see node eligibility checks and the NFD Operator documentation . Create a NodeFeatureDiscovery custom resource. Create the KataConfig custom resource. Optional: Modify the pod overhead. Configure the OpenShift sandboxed containers workload objects. 2.3.1. Installing the OpenShift sandboxed containers Operator You can install the OpenShift sandboxed containers Operator by using the CLI. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Create an osc-namespace.yaml manifest file: apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator Create the namespace by running the following command: USD oc apply -f osc-namespace.yaml Create an osc-operatorgroup.yaml manifest file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sandboxed-containers-operator-group namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator Create the operator group by running the following command: USD oc apply -f osc-operatorgroup.yaml Create an osc-subscription.yaml manifest file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: stable installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.8.1 Create the subscription by running the following command: USD oc apply -f osc-subscription.yaml Verify that the Operator is correctly installed by running the following command: USD oc get csv -n openshift-sandboxed-containers-operator This command can take several minutes to complete. Watch the process by running the following command: USD watch oc get csv -n openshift-sandboxed-containers-operator Example output NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.8.1 1.7.0 Succeeded Additional resources Using Operator Lifecycle Manager on restricted networks . Configuring proxy support in Operator Lifecycle Manager for disconnected environments. 2.3.2. Optional configurations You can configure the following options after you install the OpenShift sandboxed containers Operator. 2.3.2.1. Provisioning local block volumes You can use local block volumes with OpenShift sandboxed containers. You must first provision the local block volumes by using the Local Storage Operator (LSO). Then you must enable the nodes with the local block volumes to run OpenShift sandboxed containers workloads. You can provision local block volumes for OpenShift sandboxed containers by using the Local Storage Operator (LSO). 
The local volume provisioner looks for any block volume devices at the paths specified in the defined resource. Prerequisites You have installed the Local Storage Operator. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so creates multiple persistent volumes (PVs). Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "local-sc" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block devicePaths: 5 - /path/to/device 6 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 This setting defines whether or not to call wipefs , which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator provisioning. No other data besides signatures is erased. The default is "false" ( wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. 5 The path containing a list of local storage devices to choose from. You must use this path when enabling a node with a local block device to run OpenShift sandboxed containers workloads. 6 Replace this value with the filepath to your LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc apply -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. 
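If you do see a desired count of 0, one quick check is to compare the values in your node selector with the labels actually present on the nodes. The following command is a sketch that assumes the kubernetes.io/hostname key used in the earlier example:

oc get nodes -L kubernetes.io/hostname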
Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change existing persistent volumes because doing so might result in a destructive operation. 2.3.2.2. Enabling nodes to use a local block device You can configure nodes with a local block device to run OpenShift sandboxed containers workloads at the paths specified in the defined volume resource. Prerequisites You provisioned a block device using the Local Storage Operator (LSO). Procedure Enable each node with a local block device to run OpenShift sandboxed containers workloads by running the following command: USD oc debug node/worker-0 -- chcon -vt container_file_t /host/path/to/device The /path/to/device must be the same path you defined when creating the local storage resource. Example output system_u:object_r:container_file_t:s0 /host/path/to/device 2.3.2.3. Creating a NodeFeatureDiscovery custom resource You create a NodeFeatureDiscovery custom resource (CR) to define the configuration parameters that the Node Feature Discovery (NFD) Operator checks to determine that the worker nodes can support OpenShift sandboxed containers. Note To install the kata runtime on only selected worker nodes that you know are eligible, apply the feature.node.kubernetes.io/runtime.kata=true label to the selected nodes and set checkNodeEligibility: true in the KataConfig CR. To install the kata runtime on all worker nodes, set checkNodeEligibility: false in the KataConfig CR. In both these scenarios, you do not need to create the NodeFeatureDiscovery CR. You should only apply the feature.node.kubernetes.io/runtime.kata=true label manually if you are sure that the node is eligible to run OpenShift sandboxed containers. The following procedure applies the feature.node.kubernetes.io/runtime.kata=true label to all eligible nodes and configures the KataConfig resource to check for node eligibility. Prerequisites You have installed the NFD Operator. Procedure Create an nfd.yaml manifest file according to the following example: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-kata namespace: openshift-nfd spec: workerConfig: configData: | sources: custom: - name: "feature.node.kubernetes.io/runtime.kata" matchOn: - cpuId: ["SSE4", "VMX"] loadedKMod: ["kvm", "kvm_intel"] - cpuId: ["SSE4", "SVM"] loadedKMod: ["kvm", "kvm_amd"] # ... Create the NodeFeatureDiscovery CR: USD oc create -f nfd.yaml The NodeFeatureDiscovery CR applies the feature.node.kubernetes.io/runtime.kata=true label to all qualifying worker nodes. Create a kata-config.yaml manifest file according to the following example: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: checkNodeEligibility: true Create the KataConfig CR: USD oc create -f kata-config.yaml Verification Verify that qualifying nodes in the cluster have the correct label applied: USD oc get nodes --selector='feature.node.kubernetes.io/runtime.kata=true' Example output NAME STATUS ROLES AGE VERSION compute-3.example.com Ready worker 4h38m v1.25.0 compute-2.example.com Ready worker 4h35m v1.25.0 2.3.3. 
Creating the KataConfig custom resource You must create the KataConfig custom resource (CR) to install kata as a runtime class on your worker nodes. Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following: Install the needed RHCOS extensions, such as QEMU and kata-containers , on your RHCOS node. Ensure that the CRI-O runtime is configured with the correct runtime handlers. Create a RuntimeClass CR named kata with a default configuration. This enables users to configure workloads to use kata as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime. OpenShift sandboxed containers installs kata as a secondary, optional runtime on the cluster and not as the primary runtime. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that impede reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard disk drive rather than an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU and network. Prerequisites You have access to the cluster as a user with the cluster-admin role. Optional: You have installed the Node Feature Discovery Operator if you want to enable node eligibility checks. Procedure Create an example-kataconfig.yaml manifest file according to the following example: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: checkNodeEligibility: false 1 logLevel: info # kataConfigPoolSelector: # matchLabels: # <label_key>: '<label_value>' 2 1 Optional: Set`checkNodeEligibility` to true to run node eligibility checks. 2 Optional: If you have applied node labels to install OpenShift sandboxed containers on specific nodes, specify the key and value. Create the KataConfig CR by running the following command: USD oc apply -f example-kataconfig.yaml The new KataConfig CR is created and installs kata as a runtime class on the worker nodes. Wait for the kata installation to complete and the worker nodes to reboot before verifying the installation. Monitor the installation progress by running the following command: USD watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p" When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, the kata is installed on the cluster. 2.3.4. Modifying pod overhead Pod overhead describes the amount of system resources that a pod on a node uses. You can modify the pod overhead by changing the spec.overhead field for a RuntimeClass custom resource. For example, if the configuration that you run for your containers consumes more than 350Mi of memory for the QEMU process and guest kernel data, you can alter the RuntimeClass overhead to suit your needs. When performing any kind of file system I/O in the guest, file buffers are allocated in the guest kernel. The file buffers are also mapped in the QEMU process on the host, as well as in the virtiofsd process. For example, if you use 300Mi of file buffer cache in the guest, both QEMU and virtiofsd appear to use 300Mi additional memory. However, the same memory is being used in all three cases. Therefore, the total memory usage is only 300Mi, mapped in three different places. 
This is correctly accounted for when reporting the memory utilization metrics. Note The default values are supported by Red Hat. Changing default overhead values is not supported and can result in technical issues. Procedure Obtain the RuntimeClass object by running the following command: USD oc describe runtimeclass kata Update the overhead.podFixed.memory and cpu values and save the file as runtimeclass.yaml : kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: "500Mi" cpu: "500m" Apply the changes by running the following command: USD oc apply -f runtimeclass.yaml 2.3.5. Configuring workload objects You must configure OpenShift sandboxed containers workload objects by setting kata as the runtime class for the following pod-templated objects: Pod objects ReplicaSet objects ReplicationController objects StatefulSet objects Deployment objects DeploymentConfig objects Important Do not deploy workloads in an Operator namespace. Create a dedicated namespace for these resources. Prerequisites You have created the KataConfig custom resource (CR). Procedure Add spec.runtimeClassName: kata to the manifest of each pod-templated workload object as in the following example: apiVersion: v1 kind: <object> # ... spec: runtimeClassName: kata # ... OpenShift Container Platform creates the workload object and begins scheduling it. Verification Inspect the spec.runtimeClassName field of a pod-templated object. If the value is kata , then the workload is running on OpenShift sandboxed containers, using peer pods.
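To inspect that field for every pod in a namespace at once, a jsonpath loop is one option. This is a sketch; <namespace> is a placeholder for your own workload namespace:

oc get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.runtimeClassName}{"\n"}{end}'

Pods that print kata in the second column are using the sandboxed runtime class.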
[ "oc describe runtimeclass kata", "kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: \"500Mi\" cpu: \"500m\"", "apiVersion: v1 kind: <object> spec: runtimeClassName: kata", "apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator", "oc apply -f osc-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sandboxed-containers-operator-group namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator", "oc apply -f osc-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: stable installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.8.1", "oc apply -f osc-subscription.yaml", "oc get csv -n openshift-sandboxed-containers-operator", "watch oc get csv -n openshift-sandboxed-containers-operator", "NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.8.1 1.7.0 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block devicePaths: 5 - /path/to/device 6", "oc apply -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "oc debug node/worker-0 -- chcon -vt container_file_t /host/path/to/device", "system_u:object_r:container_file_t:s0 /host/path/to/device", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-kata namespace: openshift-nfd spec: workerConfig: configData: | sources: custom: - name: \"feature.node.kubernetes.io/runtime.kata\" matchOn: - cpuId: [\"SSE4\", \"VMX\"] loadedKMod: [\"kvm\", \"kvm_intel\"] - cpuId: [\"SSE4\", \"SVM\"] loadedKMod: [\"kvm\", \"kvm_amd\"]", "oc create -f nfd.yaml", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: checkNodeEligibility: true", "oc create -f kata-config.yaml", "oc get nodes --selector='feature.node.kubernetes.io/runtime.kata=true'", "NAME STATUS ROLES AGE VERSION 
compute-3.example.com Ready worker 4h38m v1.25.0 compute-2.example.com Ready worker 4h35m v1.25.0", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: checkNodeEligibility: false 1 logLevel: info kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 2", "oc apply -f example-kataconfig.yaml", "watch \"oc describe kataconfig | sed -n /^Status:/,/^Events/p\"", "oc describe runtimeclass kata", "kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: \"500Mi\" cpu: \"500m\"", "oc apply -f runtimeclass.yaml", "apiVersion: v1 kind: <object> spec: runtimeClassName: kata" ]
https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/user_guide/deploying-osc-bare-metal
Images
Images Red Hat OpenShift Service on AWS 4 Red Hat OpenShift Service on AWS Images. Red Hat OpenShift Documentation Team
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>.", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1", "BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", 
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create <image_name> <destination_directory>", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc get serviceaccount default -o yaml", "apiVersion: v1 imagePullSecrets: - name: default-dockercfg-123456 - name: <pull_secret_name> kind: ServiceAccount metadata: annotations: openshift.io/internal-registry-pull-secret-ref: <internal_registry_pull_secret> creationTimestamp: \"2025-03-03T20:07:52Z\" name: default namespace: default resourceVersion: \"13914\" uid: 9f62dd88-110d-4879-9e27-1ffe269poe3 secrets: - name: 
<pull_secret_name>", "apiVersion: v1 kind: Pod metadata: name: <secure_pod_name> spec: containers: - name: <container_name> image: quay.io/my-private-image imagePullSecrets: - name: <pull_secret_name>", "apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: <example_workflow> spec: entrypoint: <main_task> imagePullSecrets: - name: <pull_secret_name>", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5", "<image-stream-name>@<image-id>", "origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest", "<imagestream name>:<tag>", "origin-ruby-sample:latest", "oc describe is/<image-name>", "oc describe is/python", "Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago", "oc describe istag/<image-stream>:<tag-name>", "oc describe istag/python:latest", "Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: 
build-date=20170801", "oc get istag <image-stream-tag> -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"", "oc get istag busybox:latest -ojsonpath=\"{range .image.dockerImageManifests[*]}{.os}/{.architecture}{'\\n'}{end}\"", "linux/amd64 linux/arm linux/arm64 linux/386 linux/mips64le linux/ppc64le linux/riscv64 linux/s390x", "oc tag <image-name:tag1> <image-name:tag2>", "oc tag python:3.5 python:latest", "Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.", "oc describe is/python", "Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago", "oc tag <repository/image> <image-name:tag>", "oc tag docker.io/python:3.6.0 python:3.6", "Tag python:3.6 set to docker.io/python:3.6.0.", "oc tag <image-name:tag> <image-name:latest>", "oc tag python:3.6 python:latest", "Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.", "oc tag -d <image-name:tag>", "oc tag -d python:3.6", "Deleted tag default/python:3.6", "oc tag <repository/image> <image-name:tag> --scheduled", "oc tag docker.io/python:3.6.0 python:3.6 --scheduled", "Tag python:3.6 set to import docker.io/python:3.6.0 periodically.", "oc tag <repositiory/image> <image-name:tag>", "oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson", "oc import-image <imagestreamtag> --from=<image> --confirm", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --reference-policy=local --confirm", "--- Arch: <none> Manifests: linux/amd64 sha256:6e325b86566fafd3c4683a05a219c30c421fbccbf8d87ab9d20d4ec1131c3451 linux/arm64 sha256:d8fad562ffa75b96212c4a6dc81faf327d67714ed85475bf642729703a2b5bf6 linux/ppc64le sha256:7b7e25338e40d8bdeb1b28e37fef5e64f0afd412530b257f5b02b30851f416e1 ---", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='Legacy' --confirm", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --scheduled=true", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name> --import-mode='PreserveOriginal' --insecure=true", "oc import-image <multiarch-image-stream-tag> --from=<registry>/<project_name>/<image-name>", "oc import-image <multiarch_image_stream_tag> --from=<registry>/<project_name>/<image_name> --import-mode='PreserveOriginal'", "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: 
containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false", "apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, # ]", "oc set triggers deploy/example --from-image=example:latest -c web", "apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"example:latest\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"container\\\")].image\"}]'", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.31.3 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.31.3 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.31.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.31.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.31.3 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.31.3", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 
1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... 
docker://example.io/example/ubi-minimal", "apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.31.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.31.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.31.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.31.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.31.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.31.3", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf", "oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>", "oc adm migrate icsp 
icsp.yaml icsp-2.yaml --dest-dir idms-files", "wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml", "oc create -f <path_to_the_directory>/<file-name>.yaml", "rosa create cluster -cluster-name=<cluster_name> --sts --mode=auto --hosted-cp --operator-roles-prefix <operator_role_prefix> --oidc-config-id <id_of_oidc_configuration> --subnet-ids=<public_subnet_id>,<private_subnet_id> --registry-config-insecure-registries <insecure_registries> --registry-config-allowed-registries <allowed_registries> --registry-config-allowed-registries-for-import <registry_name:insecure> --registry-config-additional-trusted-ca <additional_trusted_ca_file>", "rosa describe cluster --cluster=<cluster_name>", "Name: rosa-hcp-test Domain Prefix: rosa-hcp-test Display Name: rosa-hcp-test ID: <cluster_hcp_id> External ID: <cluster_hcp_id> Control Plane: ROSA Service Hosted OpenShift Version: 4.Y.Z Channel Group: stable DNS: <dns> AWS Account: <aws_id> AWS Billing Account: <aws_id> API URL: <ocm_api> Console URL: Region: us-east-1 Availability: - Control Plane: MultiAZ - Data Plane: SingleAZ Nodes: - Compute (desired): 2 - Compute (current): 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: /23 - Subnets: <subnet_ids> EC2 Metadata Http Tokens: optional Role (STS) ARN: arn:aws:iam::<aws_id>:role/<account_roles_prefix>-HCP-ROSA-Installer-Role Support Role ARN: arn:aws:iam::<aws_id>:role/<account_roles_prefix>-HCP-ROSA-Support-Role Instance IAM Roles: - Worker: arn:aws:iam::<aws_id>:role/<account_roles_prefix>-HCP-ROSA-Worker-Role Operator IAM Roles: - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-kube-system-capa-controller-manager - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-kube-system-control-plane-operator - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-kube-system-kms-provider - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-image-registry-installer-cloud-cred - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-cluster-csi-drivers-ebs-cloud-credent - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-cloud-network-config-controller-cloud Managed Policies: Yes State: ready Private: No Delete Protection: Disabled Created: Oct 01 2030 09:48:52 UTC User Workload Monitoring: Enabled OIDC Endpoint URL: https://<endpoint> (Managed) Audit Log Forwarding: Disabled External Authentication: Disabled Etcd Encryption: Disabled Registry Configuration: - Allowed Registries: <allowed_registry> 1 2 - Insecure Registries: <insecure_registry> 3 - Allowed Registries for Import: 4 - Domain Name: <domain_name> 5 - Insecure: true 6 - Platform Allowlist: <platform_allowlist_id> 7 - Registries: <list_of_registries> 8 - Additional Trusted CA: 9 - <registry_name> : REDACTED", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.31.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.31.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.31.3", "rosa edit cluster --registry-config-insecure-registries <insecure_registries> --registry-config-allowed-registries <allowed_registries> --registry-config-allowed-registries-for-import <registry_name:insecure> 
--registry-config-additional-trusted-ca <additional_trusted_ca_file>", "? Changing any registry related parameter will trigger a rollout across all machinepools (all machinepool nodes will be recreated, following pod draining from each node). Do you want to proceed? Yes I: Updated cluster '<cluster_name>'", "rosa describe cluster --cluster=<cluster_name>", "Name: rosa-hcp-test Domain Prefix: rosa-hcp-test Display Name: rosa-hcp-test ID: <cluster_hcp_id> External ID: <cluster_hcp_id> Control Plane: ROSA Service Hosted OpenShift Version: 4.Y.Z Channel Group: stable DNS: <dns> AWS Account: <aws_id> AWS Billing Account: <aws_id> API URL: <ocm_api> Console URL: Region: us-east-1 Availability: - Control Plane: MultiAZ - Data Plane: SingleAZ Nodes: - Compute (desired): 2 - Compute (current): 2 Network: - Type: OVNKubernetes - Service CIDR: <service_cidr> - Machine CIDR: <machine_cidr> - Pod CIDR: <pod_cidr> - Host Prefix: /23 - Subnets: <subnet_ids> EC2 Metadata Http Tokens: optional Role (STS) ARN: arn:aws:iam::<aws_id>:role/<account_roles_prefix>-HCP-ROSA-Installer-Role Support Role ARN: arn:aws:iam::<aws_id>:role/<account_roles_prefix>-HCP-ROSA-Support-Role Instance IAM Roles: - Worker: arn:aws:iam::<aws_id>:role/<account_roles_prefix>-HCP-ROSA-Worker-Role Operator IAM Roles: - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-kube-system-capa-controller-manager - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-kube-system-control-plane-operator - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-kube-system-kms-provider - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-image-registry-installer-cloud-cred - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-cluster-csi-drivers-ebs-cloud-credent - arn:aws:iam::<aws_id>:role/<operator_roles_prefix>-openshift-cloud-network-config-controller-cloud Managed Policies: Yes State: ready Private: No Delete Protection: Disabled Created: Oct 01 2030 09:48:52 UTC User Workload Monitoring: Enabled OIDC Endpoint URL: https://<endpoint> (Managed) Audit Log Forwarding: Disabled External Authentication: Disabled Etcd Encryption: Disabled Registry Configuration: - Allowed Registries: <allowed_registry> 1 2 - Insecure Registries: <insecure_registry> 3 - Allowed Registries for Import: 4 - Domain Name: <domain_name> 5 - Insecure: true 6 - Platform Allowlist: <platform_allowlist_id> 7 - Registries: <list_of_registries> 8 - Additional Trusted CA: 9 - <registry_name> : REDACTED", "rosa edit cluster --registry-config-platform-allowlist <newID>", "podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7", "image:///usr/libexec/s2i", "#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc", "#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html-single/images/index
4.12. Fence Virt (Multicast Mode)
4.12. Fence Virt (Multicast Mode) Table 4.13, "Fence virt (Multicast Mode) " lists the fence device parameters used by fence_xvm , the fence agent for virtual machines using multicast. Table 4.13. Fence virt (Multicast Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Timeout timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0.
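Put together, these attributes map onto cluster.conf roughly as in the following sketch. The device name xvmfence, the domain guest1, and the node name are placeholders, only the attributes from the table above are shown, and the surrounding cluster.conf structure is abbreviated; see the rest of this guide for the complete file layout:

<fencedevices>
  <fencedevice agent="fence_xvm" name="xvmfence" timeout="30"/>  <!-- Name and Timeout -->
</fencedevices>
<clusternode name="node-01.example.com" nodeid="1">
  <fence>
    <method name="1">
      <device name="xvmfence" port="guest1" delay="5"/>  <!-- Domain (port) and Delay -->
    </method>
  </fence>
</clusternode>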
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-virt-multicast-ca
Chapter 5. Migrating applications secured by Red Hat Single Sign-On 7.6
Chapter 5. Migrating applications secured by Red Hat Single Sign-On 7.6 Red Hat build of Keycloak introduces key changes to how applications are using some of the Red Hat Single Sign-On 7.6 Client Adapters. In addition to no longer releasing some client adapters, Red Hat build of Keycloak also introduces fixes and improvements that impact how client applications use OpenID Connect and SAML protocols. In this chapter, you will find the instructions to address these changes and migrate your application to integrate with Red Hat build of Keycloak . 5.1. Migrating OpenID Connect Clients The following Java Client OpenID Connect Adapters are no longer released starting with this release of Red Hat build of Keycloak Red Hat JBoss Enterprise Application Platform 6.x Red Hat JBoss Enterprise Application Platform 7.x Spring Boot Red Hat Fuse Compared to when these adapters were first released, OpenID Connect is now widely available across the Java Ecosystem. Also, much better interoperability and support is achieved by using the capabilities available from the technology stack, such as your application server or framework. These adapters have reached their end of life and are only available from Red Hat Single Sign-On 7.6. It is highly recommended to look for alternatives to keep your applications updated with the latest updates from OAuth2 and OpenID connect protocols. 5.1.1. Key changes in OpenID Connect protocol and client settings 5.1.1.1. Access Type client option no longer available When you create or update an OpenID Connect client, Access Type is no longer available. However, you can use other methods to achieve this capability. To achieve the Bearer Only capability, create a client with no authentication flow. In the Capability config section of the client details, make sure that no flow is selected. The client cannot obtain any tokens from Keycloak, which is equivalent to using the Bearer Only access type. To achieve the Public capability, make sure that client authentication is disabled for this client and at least one flow is enabled. To achieve Confidential capability, make sure that Client Authentication is enabled for the client and at least one flow is enabled. The boolean flags bearerOnly and publicClient still exist on the client JSON object. They can be used when creating or updating a client by the admin REST API or when importing this client by partial import or realm import. However, these options are not directly available in the Admin Console v2. 5.1.1.2. Changes in validating schemes for valid redirect URIs If an application client is using non http(s) custom schemes, the validation now requires that a valid redirect pattern explicitly allows that scheme. Example patterns for allowing custom scheme are custom:/test, custom:/test/* or custom:. For security reasons, a general pattern such as * no longer covers them. 5.1.1.3. Support for the client_id parameter in OpenID Connect Logout Endpoint Support for the client_id parameter, which is based on the OIDC RP-Initiated Logout 1.0 specification. This capability is useful to detect what client should be used for Post Logout Redirect URI verification in case that id_token_hint parameter cannot be used. The logout confirmation screen still needs to be displayed to the user when only the client_id parameter is used without parameter id_token_hint , so clients are encouraged to use id_token_hint parameter if they do not want the logout confirmation screen to be displayed to the user. 5.1.2. 
Valid Post Logout Redirect URIs The Valid Post Logout Redirect URIs configuration option is added to the OIDC client and is aligned with the OIDC specification. You can use a different set of redirect URIs for redirection after login and logout. The value + used for Valid Post Logout Redirect URIs means that the logout uses the same set of redirect URIs as specified by the option of Valid Redirect URIs . This change also matches the default behavior when migrating from a version due to backwards compatibility. 5.1.2.1. UserInfo Endpoint Changes 5.1.2.1.1. Error response changes The UserInfo endpoint is now returning error responses fully compliant with RFC 6750 (The OAuth 2.0 Authorization Framework: Bearer Token Usage). Error code and description (if available) are provided as WWW-Authenticate challenge attributes rather than JSON object fields. The responses will be the following, depending on the error condition: In case no access token is provided: 401 Unauthorized WWW-Authenticate: Bearer realm="myrealm" In case several methods are used simultaneously to provide an access token (for example, Authorization header + POST access_token parameter), or POST parameters are duplicated: 400 Bad Request WWW-Authenticate: Bearer realm="myrealm", error="invalid_request", error_description="..." In case an access token is missing openid scope: 403 Forbidden WWW-Authenticate: Bearer realm="myrealm", error="insufficient_scope", error_description="Missing openid scope" In case of inability to resolve cryptographic keys for UserInfo response signing/encryption: 500 Internal Server Error In case of a token validation error, a 401 Unauthorized is returned in combination with the invalid_token error code. This error includes user and client related checks and actually captures all the remaining error cases: 401 Unauthorized WWW-Authenticate: Bearer realm="myrealm", error="invalid_token", error_description="..." 5.1.2.1.2. Other Changes to the UserInfo endpoint It is now required for access tokens to have the openid scope, which is stipulated by UserInfo being a feature specific to OpenID Connect and not OAuth 2.0. If the openid scope is missing from the token, the request will be denied as 403 Forbidden . See the preceding section. UserInfo now checks the user status, and returns the invalid_token response if the user is disabled. 5.1.2.1.3. Change of the default Client ID mapper of Service Account Client. Default Client ID mapper of Service Account Client has been changed. Token Claim Name field value has been changed from clientId to client_id . client_id claim is compliant with OAuth2 specifications: JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens OAuth 2.0 Token Introspection OAuth 2.0 Token Exchange clientId userSession note still exists. 5.1.2.1.4. Added iss parameter to OAuth 2.0/OpenID Connect Authentication Response RFC 9207 OAuth 2.0 Authorization Server Issuer Identification specification adds the parameter iss in the OAuth 2.0/OpenID Connect Authentication Response for realizing secure authorization responses. In past releases, we did not have this parameter, but now Red Hat build of Keycloak adds this parameter by default, as required by the specification. However, some OpenID Connect / OAuth2 adapters, and especially older Red Hat build of Keycloak adapters, may have issues with this new parameter. For example, the parameter will be always present in the browser URL after successful authentication to the client application. 
In these cases, it may be useful to disable adding the iss parameter to the authentication response. This can be done for the particular client in the Admin Console, in client details in the section with OpenID Connect Compatibility Modes . You can enable Exclude Issuer From Authentication Response to prevent adding the iss parameter to the authentication response. 5.2. Migrating Red Hat JBoss Enterprise Application Platform applications 5.2.1. Red Hat JBoss Enterprise Application Platform 8.x Your applications no longer need any additional dependency to integrate with Red Hat build of Keycloak or any other OpenID Provider. Instead, you can leverage the OpenID Connect support from the JBoss EAP native OpenID Connect Client. For more information, take a look at OpenID Connect in JBoss EAP . The JBoss EAP native adapter relies on a configuration schema very similar to the Red Hat build of Keycloak Adapter JSON Configuration. For instance, a deployment using a keycloak.json configuration file can be mapped to the following configuration in JBoss EAP: { "realm": "quickstart", "auth-server-url": "http://localhost:8180", "ssl-required": "external", "resource": "jakarta-servlet-authz-client", "credentials": { "secret": "secret" } } For examples about integrating Jakarta-based applications using the JBoss EAP native adapter with Red Hat build of Keycloak, see the following examples at the Red Hat build of Keycloak Quickstart Repository: JAX-RS Resource Server Servlet Application It is strongly recommended to migrate to JBoss EAP native OpenID Connect client as it is the best candidate for Jakarta applications deployed to JBoss EAP 8 and newer. 5.2.2. Red Hat JBoss Enterprise Application Platform 7.x As Red Hat JBoss Enterprise Application Platform 7.x is close to ending full support, Red Hat build of Keycloak will not provide support for it. For existing applications deployed to Red Hat JBoss Enterprise Application Platform 7.x adapters with maintenance support are available through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 24.0 server. 5.2.3. Red Hat JBoss Enterprise Application Platform 6.x As Red Hat JBoss Enterprise Application PlatformJBoss EAP 6.x has reached end of maintenance support, going forward neither Red Hat Single Sign-On 7.6 or Red Hat build of Keycloak will provide support for it. 5.3. Migrating Spring Boot applications The Spring Framework ecosystem is evolving fast and you should have a much better experience by leveraging the OpenID Connect support already available there. Your applications no longer need any additional dependency to integrate with Red Hat build of Keycloak or any other OpenID Provider but rely on the comprehensive OAuth2/OpenID Connect support from Spring Security. For more information, see OAuth2/OpenID Connect support from Spring Security . In terms of capabilities, it provides a standard-based OpenID Connect client implementation. An example of a capability that you might want to review, if not already using the standard protocols, is Logout . Red Hat build of Keycloak provides full support for standard-based logout protocols from the OpenID Connect ecosystem. For examples of how to integrate Spring Security applications with Red Hat build of Keycloak, see the Quickstart Repository . 
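As a rough sketch of the Spring Security approach described above, the following writes a minimal OAuth2 client registration for Red Hat build of Keycloak. The realm name myrealm, client ID my-app, and secret are placeholders, and the application also needs the spring-boot-starter-oauth2-client dependency on its classpath.

```bash
# Minimal sketch: register Red Hat build of Keycloak as a standard OIDC
# provider for Spring Security. All names and the secret are placeholders.
cat > src/main/resources/application.yml <<'EOF'
spring:
  security:
    oauth2:
      client:
        registration:
          keycloak:
            client-id: my-app
            client-secret: secret
            authorization-grant-type: authorization_code
            scope: openid
        provider:
          keycloak:
            issuer-uri: https://keycloak.example.com/realms/myrealm
EOF
```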
If migrating from the Red Hat build of Keycloak Client Adapter for Spring Boot is not an option, you still have access to the adapter from Red Hat Single Sign-On 7.6, which is now in maintenance only support. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 24.0 server. 5.4. Migrating Red Hat Fuse applications As Red Hat Fuse has reached the end of full support, Red Hat build of Keycloak 24.0 will not provide any support for it. Red Hat Fuse adapters are still available with maintenance support through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 24.0 server. 5.5. Migrating Applications Using the Authorization Services Policy Enforcer To support integration with the Red Hat build of Keycloak Authorization Services, the policy enforcer is available separately from the Java Client Adapters. <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{Red Hat build of Keycloak .version}</version> </dependency> By decoupling it from the Java Client Adapters, it is possible now to integrate Red Hat build of Keycloak to any Java technology that provides built-in support for OAuth2 or OpenID Connect. The Red Hat build of Keycloak Policy Enforcer provides built-in support for the following types of applications: Servlet Application Using Fine-grained Authorization Spring Boot REST Service Protected Using Red Hat build of Keycloak Authorization Services For integration of the Red Hat build of Keycloak Policy Enforcer with different types of applications, consider the following examples: Servlet Application Using Fine-grained Authorization Spring Boot REST Service Protected Using Keycloak Authorization Services If migrating from the Red Hat Single Sign-On 7.6 Java Adapter you are using is not an option, you still have access to the adapter from Red Hat Single Sign-On 7.6, which is now in maintenance support. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 24.0 server. Additional resources Policy enforcers 5.6. Migrating Single Page Applications (SPA) using the Red Hat build of Keycloak JS Adapter To migrate applications secured with the Red Hat Single Sign-On 7.6 adapter, upgrade to Red Hat build of Keycloak 24.0, which provides a more recent version of the adapter. Depending on how it is used, there are some minor changes needed, which are described below. 5.6.1. Legacy Promise API removed With this release, the legacy Promise API methods from the Red Hat build of Keycloak JS adapter is removed. This means that calling .success() and .error() on promises returned from the adapter is no longer possible. 5.6.2. Required to be instantiated with the new operator In a release, deprecation warnings were logged when the Red Hat build of Keycloak JS adapter is constructed without the new operator. Starting with this release, doing so will throw an exception instead. This change is to align with the expected behavior of JavaScript classes , which will allow further refactoring of the adapter in the future. To migrate applications secured with the Red Hat Single Sign-On 7.6 adapter, upgrade to Red Hat build of Keycloak 24.0, which provides a more recent version of the adapter. 5.7. Migrating SAML applications 5.7.1. Migrating Red Hat JBoss Enterprise Application Platform applications 5.7.1.1. 
Red Hat JBoss Enterprise Application Platform 8.x Red Hat build of Keycloak 24.0 includes client adapters for Red Hat JBoss Enterprise Application Platform 8.x, including support for Jakarta EE. 5.7.1.2. Red Hat JBoss Enterprise Application Platform 7.x As Red Hat JBoss Enterprise Application Platform 7.x is close to ending full support, Red Hat build of Keycloak will not provide support for it. For existing applications deployed to Red Hat JBoss Enterprise Application Platform 7.x, adapters with maintenance support are available through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 24.0 server. 5.7.1.3. Red Hat JBoss Enterprise Application Platform 6.x As Red Hat JBoss Enterprise Application Platform (JBoss EAP) 6.x has reached the end of maintenance support, going forward neither Red Hat Single Sign-On 7.6 nor Red Hat build of Keycloak will provide support for it. 5.7.2. Key changes in SAML protocol and client settings 5.7.2.1. SAML SP metadata changes Prior to this release, SAML SP metadata contained the same key for both signing and encryption use. Starting with this version of Keycloak, only realm keys intended for encryption are included for encryption use in SP metadata. For each encryption key descriptor we also specify the algorithm that it is supposed to be used with. The following table shows the supported XML-Enc algorithms with the mapping to Red Hat build of Keycloak realm keys. XML-Enc algorithm Realm key algorithm rsa-oaep-mgf1p RSA-OAEP rsa-1_5 RSA1_5 Additional resources Keycloak Upgrading Guide 5.7.2.2. Deprecated RSA_SHA1 and DSA_SHA1 algorithms for SAML Algorithms RSA_SHA1 and DSA_SHA1 , which can be configured as Signature algorithms on SAML adapters, clients, and identity providers, are deprecated. We recommend using safer alternatives based on SHA256 or SHA512 . Also, verifying signatures on signed SAML documents or assertions with these algorithms does not work on Java 17 or higher. If you use these algorithms and the other party consuming your SAML documents is running on Java 17 or higher, verifying signatures will not work. A possible workaround is to remove algorithms such as http://www.w3.org/2000/09/xmldsig#rsa-sha1 or http://www.w3.org/2000/09/xmldsig#dsa-sha1 from the list of disallowed algorithms configured in the jdk.xml.dsig.secureValidationPolicy property in the file USDJAVA_HOME/conf/security/java.security
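For example, a quick way to check whether the SHA1 XML-DSig algorithms are currently disallowed on the Java 17 or higher host that consumes your SAML documents (assuming a standard JDK layout) is:

```bash
# Any match here means the algorithm is listed in the JDK's XML signature
# secure validation policy and signatures using it will be rejected.
grep -n 'xmldsig#rsa-sha1\|xmldsig#dsa-sha1' "$JAVA_HOME/conf/security/java.security"
```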
[ "401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\"", "400 Bad Request WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_request\", error_description=\"...\"", "403 Forbidden WWW-Authenticate: Bearer realm=\"myrealm\", error=\"insufficient_scope\", error_description=\"Missing openid scope\"", "500 Internal Server Error", "401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_token\", error_description=\"...\"", "{ \"realm\": \"quickstart\", \"auth-server-url\": \"http://localhost:8180\", \"ssl-required\": \"external\", \"resource\": \"jakarta-servlet-authz-client\", \"credentials\": { \"secret\": \"secret\" } }", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{Red Hat build of Keycloak .version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/migration_guide/migrating-applications
Chapter 3. Manually creating IAM for Azure
Chapter 3. Manually creating IAM for Azure In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator . 3.2. Manually create IAM The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Change to the directory that contains the installation program and create the install-config.yaml file by running the following command: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual . Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-set" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade From the directory that contains the installation program, proceed with your cluster creation: USD openshift-install create cluster --dir <installation_directory> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 3.3. steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure
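As a follow-up to the Secret object example above: the values under data must be base64 encoded. A small sketch that encodes the raw Azure credential values from environment variables (the variable names are placeholders) is:

```bash
# Encode each raw Azure credential value for the data section of the Secret
# manifest shown above. The environment variable names are placeholders.
for var in AZURE_SUBSCRIPTION_ID AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID; do
  printf '%s: %s\n' "$var" "$(printf '%s' "${!var}" | base64 -w0)"
done
```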
[ "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "grep \"release.openshift.io/feature-set\" *", "0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade", "openshift-install create cluster --dir <installation_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/manually-creating-iam-azure
Chapter 5. Quay.io organizations overview
Chapter 5. Quay.io organizations overview In Quay.io, an organization is a grouping of users, repositories, and teams. It provides a means to organize and manage access control and permissions within the registry. With organizations, administrators can assign roles and permissions to users and teams. Other useful information about organizations includes the following: You cannot have an organization embedded within another organization. To subdivide an organization, you use teams. Organizations cannot contain users directly. You must first add a team, and then add one or more users to each team. Note Individual users can be added to specific repositories inside of an organization. Consequently, those users are not members of any team on the Repository Settings page. The Collaborators View on the Teams and Memberships page shows users who have direct access to specific repositories within the organization without needing to be part of that organization specifically. Teams can be set up in organizations as just members who use the repositories and associated images, or as administrators with special privileges for managing the Organization. Users can create their own organization to share repositories of container images. This can be done through the Quay.io UI. 5.1. Creating an organization by using the UI Use the following procedure to create a new organization by using the UI. Procedure Log in to your Red Hat Quay registry. Click Organization in the navigation pane. Click Create Organization . Enter an Organization Name , for example, testorg . Enter an Organization Email . Click Create . Now, your example organization should appear under the Organizations page. 5.2. Organization settings With Quay.io, some basic organization settings can be adjusted by using the UI. This includes adjusting general settings, such as the email address associated with the organization, and time machine settings, which allow administrators to adjust when a tag is garbage collected after it is permanently deleted. Use the following procedure to alter your organization settings by using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization whose settings you want to adjust, for example, test-org . Click the Settings tab. Optional. Enter the email address associated with the organization. Optional. Set the allotted time for the Time Machine feature to one of the following: A few seconds A day 7 days 14 days A month Click Save .
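If you prefer to script organization creation instead of using the UI, Quay.io also exposes an API. The endpoint path, request body, and token scope in the following sketch are assumptions; verify them against the Quay.io API reference before relying on them.

```bash
# Hypothetical sketch: create an organization with an OAuth access token.
# Endpoint and payload are assumptions; check the Quay.io API reference.
curl -sS -X POST "https://quay.io/api/v1/organization/" \
  -H "Authorization: Bearer ${QUAY_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "testorg", "email": "owner@example.com"}'
```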
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/organizations-overview
Chapter 3. Upgrading Red Hat Enterprise Linux on Satellite or Capsule
Chapter 3. Upgrading Red Hat Enterprise Linux on Satellite or Capsule Satellite and Capsule are supported on both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9. You can use the following methods to upgrade your Satellite or Capsule operating system from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9: Leapp in-place upgrade With Leapp, you can upgrade your Satellite or Capsule in-place therefore it is faster but imposes a downtime on the services. Migration by using cloning The Red Hat Enterprise Linux 8 system remains operational during the migration using cloning, which reduces the downtime. You cannot use cloning for Capsule Server migrations. Migration by using backup and restore The Red Hat Enterprise Linux 8 system remains operational during the migration using cloning, which reduces the downtime. You can use backup and restore for migrating both Satellite and Capsule operating system from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9. 3.1. Upgrading Satellite or Capsule to RHEL 9 in-place by using Leapp You can use the Leapp tool to upgrade as well as to help detect and resolve issues that could prevent you from upgrading successfully. Prerequisites Review known issues before you begin an upgrade. For more information, see Known issues in Red Hat Satellite 6.16 . If you use an HTTP proxy in your environment, configure the Subscription Manager to use the HTTP proxy for connection. For more information, see Troubleshooting in Upgrading from RHEL 8 to RHEL 9 . Satellite 6.16 or Capsule 6.16 running on Red Hat Enterprise Linux 8. If you are upgrading Capsule Servers, enable and synchronize the following repositories to Satellite Server, and add them to the lifecycle environment and content view that is attached to your Capsule Server: Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) : rhel-9-for-x86_64-baseos-rpms for the major version: x86_64 9 . rhel-9-for-x86_64-baseos-rpms for the latest supported minor version: x86_64 9. Y , where Y represents the minor version. For information about the latest supported minor version for in-place upgrades, see Supported upgrade paths in Upgrading from RHEL 8 to RHEL 9 . Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) : rhel-9-for-x86_64-appstream-rpms for the major version: x86_64 9 . rhel-9-for-x86_64-appstream-rpms for the latest supported minor version: x86_64 9. Y , where Y represents the minor version. For information about the latest supported minor versions for in-place upgrades, see Supported upgrade paths in Upgrading from RHEL 8 to RHEL 9 . Red Hat Satellite Capsule 6.16 for RHEL 9 x86_64 RPMs : satellite-capsule-6.16-for-rhel-9-x86_64-rpms Red Hat Satellite Maintenance 6.16 for RHEL 9 x86_64 RPMs : satellite-maintenance-6.16-for-rhel-9-x86_64-rpms You require access to Red Hat Enterprise Linux and Satellite packages. Obtain the ISO files for Red Hat Enterprise Linux 9 and Satellite 6.16. For more information, see Downloading the Binary DVD Images in Installing Satellite Server in a disconnected network environment . Procedure Install required packages: Set up the required repositories to perform the upgrade in a disconnected environment. Important The required repositories cannot be served from a locally mounted ISO but must be delivered over the network from a different machine. Leapp completes part of the upgrade in a container that has no access to additional ISO mounts. 
Add the following lines to /etc/yum.repos.d/rhel9.repo : Add the following lines to /etc/yum.repos.d/satellite.repo: Let Leapp analyze your system: The first run will most likely report issues and inhibit the upgrade. Examine the report in the /var/log/leapp/leapp-report.txt file, answer all questions by using leapp answer , and manually resolve other reported problems. Run leapp preupgrade again and make sure that it does not report any more issues. Let Leapp create the upgrade environment: Reboot the system to start the upgrade. After the system reboots, a live system conducts the upgrade, reboots to fix SELinux labels and then reboots into the final Red Hat Enterprise Linux 9 system. Wait for Leapp to finish the upgrade. You can monitor the process with journalctl : Unlock packages: Verify the post-upgrade state. For more information, see Verifying the post-upgrade state in Upgrading from RHEL 8 to RHEL 9 . Perform post-upgrade tasks on the RHEL 9 system. For more information, see Performing post-upgrade tasks on the RHEL 9 system in Upgrading from RHEL 8 to RHEL 9 . Lock packages: Change SELinux to enforcing mode. For more information, see Changing SELinux mode to enforcing in Upgrading from RHEL 8 to RHEL 9 . Additional resources For more information on customizing the Leapp upgrade for your environment, see Customizing your Red Hat Enterprise Linux in-place upgrade . For more information, see How to in-place upgrade an offline / disconnected RHEL 8 machine to RHEL 9 with Leapp? 3.2. Migrating Satellite to RHEL 9 by using cloning You can clone your existing Satellite Server from Red Hat Enterprise Linux 8 to a freshly installed Red Hat Enterprise Linux 9 system. Create a backup of the existing Satellite Server, which you then clone on the new Red Hat Enterprise Linux 9 system. Note You cannot use cloning for Capsule Server backups. Procedure Perform a full backup of your Satellite Server. This is the source Red Hat Enterprise Linux 8 server that you are migrating. For more information, see Performing a full backup of Satellite Server in Administering Red Hat Satellite . Deploy a system with Red Hat Enterprise Linux 9 and the same configuration as the source server. This is the target server. Clone the server. Clone configures hostname for the target server. For more information, see Cloning Satellite Server in Administering Red Hat Satellite 3.3. Migrating Satellite or Capsule to RHEL 9 using backup and restore You can migrate your existing Satellite Server and Capsule Server from Red Hat Enterprise Linux 8 to a freshly installed Red Hat Enterprise Linux 9 system. The migration involves creating a backup of the existing Satellite Server and Capsule Server, which you then restore on the new Red Hat Enterprise Linux 9 system. Procedure Perform a full backup of your Satellite Server or Capsule. This is the source Red Hat Enterprise Linux 8 server that you are migrating. For more information, see Performing a full backup of Satellite Server or Capsule Server in Administering Red Hat Satellite . Deploy a system with Red Hat Enterprise Linux 9 and the same hostname and configuration as the source server. This is the target server. Restore the backup. Restore does not significantly alter the target system and requires additional configuration. For more information, see Restoring Satellite Server or Capsule Server from a backup in Administering Red Hat Satellite . Restore the Capsule Server backup. 
For more information, see Restoring Satellite Server or Capsule Server from a backup in Administering Red Hat Satellite .
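Both migration paths start from a full backup of the source server. As a sketch, assuming an offline backup to a local directory with sufficient free space and a reachable target host (both names are examples):

```bash
# Take a full offline backup of the source Satellite Server or Capsule Server.
satellite-maintain backup offline /var/satellite-backup

# Copy the backup to the Red Hat Enterprise Linux 9 target system before
# running the restore or clone procedure referenced above.
rsync -a /var/satellite-backup/ root@rhel9-target.example.com:/var/satellite-backup/
```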
[ "satellite-maintain packages install leapp leapp-upgrade-el8toel9", "[BaseOS] name=rhel-9-for-x86_64-baseos-rpms baseurl=http:// server.example.com /rhel9/BaseOS/ [AppStream] name=rhel-9-for-x86_64-appstream-rpms baseurl=http:// server.example.com /rhel9/AppStream/", "[satellite-6.16-for-rhel-9-x86_64-rpms] name=satellite-6.16-for-rhel-9-x86_64-rpms baseurl=http:// server.example.com /sat6/Satellite/ [satellite-maintenance-6.16-for-rhel-9-x86_64-rpms] name=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms baseurl=http:// server.example.com /sat6/Maintenance/", "leapp preupgrade --no-rhsm --enablerepo BaseOS --enablerepo AppStream --enablerepo satellite-6.16-for-rhel-9-x86_64-rpms --enablerepo satellite-maintenance-6.16-for-rhel-9-x86_64-rpms", "leapp upgrade --no-rhsm --enablerepo BaseOS --enablerepo AppStream --enablerepo satellite-6.16-for-rhel-9-x86_64-rpms --enablerepo satellite-maintenance-6.16-for-rhel-9-x86_64-rpms", "journalctl -u leapp_resume -f", "satellite-maintain packages unlock", "satellite-maintain packages lock" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_disconnected_red_hat_satellite_to_6.16/upgrading_EL_on_satellite_or_proxy_upgrading-disconnected
Key migration terminology
Key migration terminology While the following migration terms are commonly used in the software industry, these definitions are specific to Red Hat Enterprise Linux (RHEL). Update Sometimes called a software patch, an update is an addition to the current version of the application, operating system, or software that you are running. A software update addresses any issues or bugs to provide a better experience of working with the technology. In RHEL, an update relates to a minor release, for example, updating from RHEL 8.1 to 8.2. Upgrade An upgrade is when you replace the application, operating system, or software that you are currently running with a newer version. Typically, you first back up your data according to instructions from Red Hat. When you upgrade RHEL, you have two options: In-place upgrade: During an in-place upgrade, you replace the earlier version with the new version without removing the earlier version first. The installed applications and utilities, along with the configurations and preferences, are incorporated into the new version. Clean install: A clean install removes all traces of the previously installed operating system, system data, configurations, and applications and installs the latest version of the operating system. A clean install is ideal if you do not need any of the data or applications on your systems or if you are developing a new project that does not rely on prior builds. Operating system conversion A conversion is when you convert your operating system from a different Linux distribution to Red Hat Enterprise Linux. Typically, you first back up your data according to instructions from Red Hat. Migration Typically, a migration indicates a change of platform: software or hardware. Moving from Windows to Linux is a migration. Moving a user from one laptop to another or a company from one server to another is a migration. However, most migrations also involve upgrades, and sometimes the terms are used interchangeably. Migration to RHEL: Conversion of an existing operating system to RHEL Migration across RHEL: Upgrade from one version of RHEL to another
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility/ref_key-migration-terminology_converting-from-a-linux-distribution-to-rhel
Installing on OpenStack
Installing on OpenStack OpenShift Container Platform 4.13 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/index
probe::scheduler.kthread_stop
probe::scheduler.kthread_stop Name probe::scheduler.kthread_stop - A thread created by kthread_create is being stopped Synopsis scheduler.kthread_stop Values thread_pid PID of the thread being stopped thread_priority priority of the thread
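For example, the probe's values can be printed with a one-line SystemTap script; this assumes the systemtap package and matching kernel debuginfo are installed and that you run it as root.

```bash
# Print the PID and priority each time a kthread created by kthread_create
# is stopped.
stap -e 'probe scheduler.kthread_stop {
  printf("kthread_stop: pid=%d priority=%d\n", thread_pid, thread_priority)
}'
```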
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scheduler-kthread-stop
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_z/preface-ibm-z
Chapter 1. Red Hat OpenStack Platform high availability overview and planning
Chapter 1. Red Hat OpenStack Platform high availability overview and planning Red Hat OpenStack Platform (RHOSP) high availability (HA) is a collection of services that orchestrate failover and recovery for your deployment. When you plan your HA deployment, ensure that you review the considerations for different aspects of the environment, such as hardware assignments and network configuration. 1.1. Red Hat OpenStack Platform high availability services Red Hat OpenStack Platform (RHOSP) employs several technologies to provide the services required to implement high availability (HA). These services include Galera, RabbitMQ, Redis, HAProxy, individual services that Pacemaker manages, and Systemd and plain container services that Podman manages. 1.1.1. Service types Core container Core container services are Galera, RabbitMQ, Redis, and HAProxy. These services run on all Controller nodes and require specific management and constraints for the start, stop and restart actions. You use Pacemaker to launch, manage, and troubleshoot core container services. Note RHOSP uses the MariaDB Galera Cluster to manage database replication. Active-passive Active-passive services run on one Controller node at a time, and include services such as openstack-cinder-volume . To move an active-passive service, you must use Pacemaker to ensure that the correct stop-start sequence is followed. Systemd and plain container Systemd and plain container services are independent services that can withstand a service interruption. Therefore, if you restart a high availability service such as Galera, you do not need to manually restart any other service, such as nova-api . You can use systemd or Podman to directly manage systemd and plain container services. When orchestrating your HA deployment, director uses templates and Puppet modules to ensure that all services are configured and launched correctly. In addition, when troubleshooting HA issues, you must interact with services in the HA framework using the podman command or the systemctl command. 1.1.2. Service modes HA services can run in one of the following modes: Active-active Pacemaker runs the same service on multiple Controller nodes, and uses HAProxy to distribute traffic across the nodes or to a specific Controller with a single IP address. In some cases, HAProxy distributes traffic to active-active services with Round Robin scheduling. You can add more Controller nodes to improve performance. Important Active-active mode is supported only in distributed compute node (DCN) architecture at Edge sites. Active-passive Services that are unable to run in active-active mode must run in active-passive mode. In this mode, only one instance of the service is active at a time. For example, HAProxy uses stick-table options to direct incoming Galera database connection requests to a single back-end service. This helps prevent too many simultaneous connections to the same data from multiple Galera nodes. 1.2. Planning high availability hardware assignments When you plan hardware assignments, consider the number of nodes that you want to run in your deployment, as well as the number of Virtual Machine (vm) instances that you plan to run on Compute nodes. Controller nodes Most non-storage services run on Controller nodes. All services are replicated across the three nodes and are configured as active-active or active-passive services. A high availability (HA) environment requires a minimum of three nodes. 
Red Hat Ceph Storage nodes Storage services run on these nodes and provide pools of Red Hat Ceph Storage areas to the Compute nodes. A minimum of three nodes are required. Compute nodes Virtual machine (VM) instances run on Compute nodes. You can deploy as many Compute nodes as you need to meet your capacity requirements, as well as migration and reboot operations. You must connect Compute nodes to the storage network and to the project network to ensure that VMs can access storage nodes, VMs on other Compute nodes, and public networks. STONITH You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. Deploying a highly available overcloud without STONITH is not supported. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . 1.3. Planning high availability networking When you plan the virtual and physical networks, consider the provisioning network switch configuration and the external network switch configuration. In addition to the network configuration, you must deploy the following components: Provisioning network switch This switch must be able to connect the undercloud to all the physical computers in the overcloud. The NIC on each overcloud node that is connected to this switch must be able to PXE boot from the undercloud. The portfast parameter must be enabled. Controller/External network switch This switch must be configured to perform VLAN tagging for the other VLANs in the deployment. Allow only VLAN 100 traffic to external networks. Networking hardware and keystone endpoint To prevent a Controller node network card or network switch failure disrupting overcloud services availability, ensure that the keystone admin endpoint is located on a network that uses bonded network cards or networking hardware redundancy. If you move the keystone endpoint to a different network, such as internal_api , ensure that the undercloud can reach the VLAN or subnet. For more information, see the Red Hat Knowledgebase solution How to migrate Keystone Admin Endpoint to internal_api network . 1.4. Accessing the high availability environment To investigate high availability (HA) nodes, use the stack user to log in to the overcloud nodes and run the openstack server list command to view the status and details of the nodes. Prerequisites High availability is deployed and running. Procedure In a running HA environment, log in to the undercloud as the stack user. Identify the IP addresses of your overcloud nodes: Log in to one of the overcloud nodes: Replace <node_ip> with the IP address of the node that you want to log in to. 1.5. Additional resources Chapter 2, Example deployment: High availability cluster with Compute and Ceph
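Once logged in to a Controller node, you can inspect the HA services described in this chapter. The following is a sketch; the tripleo_* systemd unit naming is typical for recent releases but may differ in your deployment.

```bash
# Pacemaker-managed core container services (Galera, RabbitMQ, Redis, HAProxy)
# and any failed resources.
sudo pcs status

# Systemd and plain container services managed outside Pacemaker.
sudo systemctl list-units 'tripleo_*' --state=failed
sudo podman ps --format '{{.Names}} {{.Status}}'
```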
[ "source ~/stackrc (undercloud) USD openstack server list +-------+------------------------+---+----------------------+---+ | ID | Name |...| Networks |...| +-------+------------------------+---+----------------------+---+ | d1... | overcloud-controller-0 |...| ctlplane=*10.200.0.11* |...|", "(undercloud) USD ssh tripleo-admin@<node_IP>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_deployment_and_usage/assembly_ha-overview-planning_rhosp
Chapter 2. Installing Dev Spaces
Chapter 2. Installing Dev Spaces This section contains instructions to install Red Hat OpenShift Dev Spaces. You can deploy only one instance of OpenShift Dev Spaces per cluster. Section 2.1.2, "Installing Dev Spaces on OpenShift using CLI" Section 2.1.3, "Installing Dev Spaces on OpenShift using the web console" Section 2.1.4, "Installing Dev Spaces in a restricted environment" 2.1. Installing Dev Spaces in the cloud Deploy and run Red Hat OpenShift Dev Spaces in the cloud. Prerequisites A OpenShift cluster to deploy OpenShift Dev Spaces on. dsc : The command line tool for Red Hat OpenShift Dev Spaces. See: Section 1.2, "Installing the dsc management tool" . 2.1.1. Deploying OpenShift Dev Spaces in the cloud Follow the instructions below to start the OpenShift Dev Spaces Server in the cloud by using the dsc tool. Section 2.1.2, "Installing Dev Spaces on OpenShift using CLI" Section 2.1.3, "Installing Dev Spaces on OpenShift using the web console" Section 2.1.4, "Installing Dev Spaces in a restricted environment" https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#installing-che-on-microsoft-azure 2.1.2. Installing Dev Spaces on OpenShift using CLI You can install OpenShift Dev Spaces on OpenShift. Prerequisites OpenShift Container Platform An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . dsc . See: Section 1.2, "Installing the dsc management tool" . Procedure Optional: If you previously deployed OpenShift Dev Spaces on this OpenShift cluster, ensure that the OpenShift Dev Spaces instance is removed: Create the OpenShift Dev Spaces instance: Verification steps Verify the OpenShift Dev Spaces instance status: Navigate to the OpenShift Dev Spaces cluster instance: 2.1.3. Installing Dev Spaces on OpenShift using the web console If you have trouble installing OpenShift Dev Spaces on the command line , you can install it through the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . For a repeat installation on the same OpenShift cluster: you uninstalled the OpenShift Dev Spaces instance according to Chapter 8, Uninstalling Dev Spaces . Procedure In the Administrator view of the OpenShift web console, go to Operators OperatorHub and search for Red Hat OpenShift Dev Spaces . Install the Red Hat OpenShift Dev Spaces Operator. Tip See Installing from OperatorHub using the web console . Caution The Red Hat OpenShift Dev Spaces Operator depends on the Dev Workspace Operator. If you install the Red Hat OpenShift Dev Spaces Operator manually to a non-default namespace, ensure that the Dev Workspace Operator is also installed in the same namespace. This is required as the Operator Lifecycle Manager will attempt to install the Dev Workspace Operator as a dependency within the Red Hat OpenShift Dev Spaces Operator namespace, potentially resulting in two conflicting installations of the Dev Workspace Operator if the latter is installed in a different namespace. Caution If you want to onboard Web Terminal Operator on the cluster make sure to use the same installation namespace as Red Hat OpenShift Dev Spaces Operator since both depend on Dev Workspace Operator. 
Web Terminal Operator, Red Hat OpenShift Dev Spaces Operator, and Dev Workspace Operator must be installed in the same namespace. Create the openshift-devspaces project in OpenShift as follows: Go to Operators Installed Operators Red Hat OpenShift Dev Spaces instance Specification Create CheCluster YAML view . In the YAML view , replace namespace: openshift-operators with namespace: openshift-devspaces . Select Create . Tip See Creating applications from installed Operators . Verification In Red Hat OpenShift Dev Spaces instance Specification , go to devspaces , landing on the Details tab. Under Message , check that there is None , which means no errors. Under Red Hat OpenShift Dev Spaces URL , wait until the URL of the OpenShift Dev Spaces instance appears, and then open the URL to check the OpenShift Dev Spaces dashboard. In the Resources tab, view the resources for the OpenShift Dev Spaces deployment and their status. 2.1.4. Installing Dev Spaces in a restricted environment On an OpenShift cluster operating in a restricted network, public resources are not available. However, deploying OpenShift Dev Spaces and running workspaces requires the following public resources: Operator catalog Container images Sample projects To make these resources available, you can replace them with their copy in a registry accessible by the OpenShift cluster. Prerequisites The OpenShift cluster has at least 64 GB of disk space. The OpenShift cluster is ready to operate on a restricted network. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks . An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication . opm . See Installing the opm CLI . jq . See Downloading jq . podman . See Podman Installation Instructions . skopeo version 1.6 or higher. See Installing Skopeo . An active skopeo session with administrative access to the private Docker registry. Authenticating to a registry , and Mirroring images for a disconnected installation . dsc for OpenShift Dev Spaces version 3.16. See Section 1.2, "Installing the dsc management tool" . Procedure Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh . 1 The private Docker registry where the images will be mirrored Install OpenShift Dev Spaces with the configuration set in the che-operator-cr-patch.yaml during the step: Allow incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user projects. See: Section 3.8.1, "Configuring network policies" . Additional resources Red Hat-provided Operator catalogs Managing custom catalogs 2.1.4.1. Setting up an Ansible sample Follow these steps to use an Ansible sample in restricted environments. Prerequisites Microsoft Visual Studio Code - Open Source IDE A 64-bit x86 system. Procedure Mirror the following images: Configure the cluster proxy to allow access to the following domains: Note Support for the following IDE and CPU architectures is planned for a future release: IDE JetBrains IntelliJ IDEA Community Edition IDE ( Technology Preview ) CPU architectures IBM Power (ppc64le) IBM Z (s390x) 2.2. 
Finding the fully qualified domain name (FQDN) You can get the fully qualified domain name (FQDN) of your organization's instance of OpenShift Dev Spaces on the command line or in the OpenShift web console. Tip You can find the FQDN for your organization's OpenShift Dev Spaces instance in the Administrator view of the OpenShift web console as follows. Go to Operators Installed Operators Red Hat OpenShift Dev Spaces instance Specification devspaces Red Hat OpenShift Dev Spaces URL . Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . Procedure Run the following command: oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'
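For scripting, you can capture the FQDN in a shell variable; this assumes the default CheCluster name ( devspaces ) and namespace ( openshift-devspaces ) used throughout this chapter.

```bash
# Store the OpenShift Dev Spaces URL for later use, for example in smoke tests.
CHE_URL=$(oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}')
echo "Dev Spaces dashboard: ${CHE_URL}"
```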
[ "dsc server:delete", "dsc server:deploy --platform openshift", "dsc server:status", "dsc dashboard:open", "create namespace openshift-devspaces", "bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.16 --devworkspace_operator_version \"v0.30.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.16\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.16.0\" --my_registry \" <my_registry> \" 1", "dsc server:deploy --platform=openshift --olm-channel stable --catalog-source-name=devspaces-disconnected-install --catalog-source-namespace=openshift-marketplace --skip-devworkspace-operator --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml", "ghcr.io/ansible/ansible-workspace-env-reference@sha256:03d7f0fe6caaae62ff2266906b63d67ebd9cf6e4a056c7c0a0c1320e6cfbebce registry.access.redhat.com/ubi8/python-39@sha256:301fec66443f80c3cc507ccaf72319052db5a1dc56deb55c8f169011d4bbaacb", ".ansible.com .ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com", "get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/administration_guide/installing-devspaces
Part IV. Install
Part IV. Install
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/install
8.5.2. Adding a Cluster Service to the Cluster
8.5.2. Adding a Cluster Service to the Cluster To add a cluster service to the cluster, follow the steps in this section. Note The examples provided in this section show a cluster service in which all of the resources are at the same level. For information on defining a service in which there is a dependency chain in a resource hierarchy, as well as the rules that govern the behavior of parent and child resources, see Appendix C, HA Resource Behavior . Open /etc/cluster/cluster.conf at any node in the cluster. Add a service section within the rm element for each service. For example: Configure the following parameters (attributes) in the service element: autostart - Specifies whether to autostart the service when the cluster starts. Use '1' to enable and '0' to disable; the default is enabled. domain - Specifies a failover domain (if required). exclusive - Specifies a policy wherein the service only runs on nodes that have no other services running on them. recovery - Specifies a recovery policy for the service. The options are to relocate, restart, disable, or restart-disable the service. Depending on the type of resources you want to use, populate the service with global or service-specific resources For example, here is an Apache service that uses global resources: For example, here is an Apache service that uses service-specific resources: Example 8.10, " cluster.conf with Services Added: One Using Global Resources and One Using Service-Specific Resources " shows an example of a cluster.conf file with two services: example_apache - This service uses global resources web_fs , 127.143.131.100 , and example_server . example_apache2 - This service uses service-specific resources web_fs2 , 127.143.131.101 , and example_server2 . Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3"> ). Save /etc/cluster/cluster.conf . (Optional) Validate the updated file against the cluster schema ( cluster.rng ) by running the ccs_config_validate command. For example: Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. Verify that the updated configuration file has been propagated. Proceed to Section 8.9, "Verifying a Configuration" . Example 8.10. cluster.conf with Services Added: One Using Global Resources and One Using Service-Specific Resources
[ "<rm> <service autostart=\"1\" domain=\"\" exclusive=\"0\" name=\"\" recovery=\"restart\"> </service> </rm>", "<rm> <resources> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </resources> <service autostart=\"1\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache\" recovery=\"relocate\"> <fs ref=\"web_fs\"/> <ip ref=\"127.143.131.100\"/> <apache ref=\"example_server\"/> </service> </rm>", "<rm> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache2\" recovery=\"relocate\"> <fs name=\"web_fs2\" device=\"/dev/sdd3\" mountpoint=\"/var/www2\" fstype=\"ext3\"/> <ip address=\"127.143.131.101\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server2\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </service> </rm>", "ccs_config_validate Configuration validates", "<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"2\"/> </method> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"3\"/> </method> </fence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"apc\" passwd=\"password_example\"/> </fencedevices> <rm> <failoverdomains> <failoverdomain name=\"example_pri\" nofailback=\"0\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"node-01.example.com\" priority=\"1\"/> <failoverdomainnode name=\"node-02.example.com\" priority=\"2\"/> <failoverdomainnode name=\"node-03.example.com\" priority=\"3\"/> </failoverdomain> </failoverdomains> <resources> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </resources> <service autostart=\"1\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache\" recovery=\"relocate\"> <fs ref=\"web_fs\"/> <ip ref=\"127.143.131.100\"/> <apache ref=\"example_server\"/> </service> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache2\" recovery=\"relocate\"> <fs name=\"web_fs2\" device=\"/dev/sdd3\" mountpoint=\"/var/www2\" fstype=\"ext3\"/> <ip address=\"127.143.131.101\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server2\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </service> </rm> </cluster>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-config-add-service-cli-CA
Chapter 2. About migrating from OpenShift Container Platform 3 to 4
Chapter 2. About migrating from OpenShift Container Platform 3 to 4 OpenShift Container Platform 4 contains new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. OpenShift Container Platform 4 clusters are deployed and managed very differently from OpenShift Container Platform 3. The most effective way to migrate from OpenShift Container Platform 3 to 4 is by using a CI/CD pipeline to automate deployments in an application lifecycle management framework. If you do not have a CI/CD pipeline or if you are migrating stateful applications, you can use the Migration Toolkit for Containers (MTC) to migrate your application workloads. You can use Red Hat Advanced Cluster Management for Kubernetes to help you import and manage your OpenShift Container Platform 3 clusters easily, enforce policies, and redeploy your applications. Take advantage of the free subscription to use Red Hat Advanced Cluster Management to simplify your migration process. To successfully transition to OpenShift Container Platform 4, review the following information: Differences between OpenShift Container Platform 3 and 4 Architecture Installation and upgrade Storage, network, logging, security, and monitoring considerations About the Migration Toolkit for Containers Workflow File system and snapshot copy methods for persistent volumes (PVs) Direct volume migration Direct image migration Advanced migration options Automating your migration with migration hooks Using the MTC API Excluding resources from a migration plan Configuring the MigrationController custom resource for large-scale migrations Enabling automatic PV resizing for direct volume migration Enabling cached Kubernetes clients for improved performance For new features and enhancements, technical changes, and known issues, see the MTC release notes .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migrating_from_version_3_to_4/about-migrating-from-3-to-4
Chapter 9. Connecting to an instance
Chapter 9. Connecting to an instance You can access an instance from a location external to the cloud by using a remote shell such as SSH or WinRM, when you have allowed the protocol in the instance security group rules. You can also connect directly to the console of an instance, so that you can debug even if the network connection fails. Note If you did not provide a key pair to the instance, or allocate a security group to the instance, you can access the instance only from inside the cloud by using VNC. You cannot ping the instance. Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command, for example: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: 9.1. Accessing an instance console You can connect directly to the VNC console for an instance by entering the VNC console URL in a browser. Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure To display the VNC console URL for an instance, enter the following command: To connect directly to the VNC console, enter the displayed URL in a browser. 9.2. Logging in to an instance You can log in to public instances remotely. Prerequisites You have the key pair certificate for the instance. The certificate is downloaded when the key pair is created. If you did not create the key pair yourself, ask your administrator. The instance is configured as a public instance. For more information on the requirements of a public instance, see Providing public access to an instance . You have a cloud user account. The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Retrieve the floating IP address of the instance you want to log in to: Replace <instance> with the name or ID of the instance that you want to connect to. Use the automatically created cloud-user account to log in to your instance: Replace <keypair> with the name of the key pair. Replace <floating_ip> with the floating IP address of the instance. Tip You can use the following command to log in to an instance without the floating IP address: Replace <keypair> with the name of the key pair. Replace <instance> with the name or ID of the instance that you want to connect to.
[ "openstack flavor list --os-cloud <cloud_name>", "`export OS_CLOUD=<cloud_name>`", "openstack console url show <vm_name> +-------+------------------------------------------------------+ | Field | Value | +-------+------------------------------------------------------+ | type | novnc | | url | http://172.25.250.50:6080/vnc_auto.html?token= | | | 962dfd71-f047-43d3-89a5-13cb88261eb9 | +-------+-------------------------------------------------------+", "openstack server show <instance>", "ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD", "openstack server ssh --login cloud-user --identity ~/.ssh/<keypair>.pem --private <instance>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/assembly_connecting-to-an-instance_instances
Chapter 6. Subscription Management
Chapter 6. Subscription Management Migration from RHN Classic to certificate-based RHN Red Hat Enterprise Linux 6.3 includes a new tool to migrate RHN Classic customers to the certificate-based RHN. For more information, refer to the Red Hat Enterprise Linux 6 Subscription Management Guide . Subscription Manager gpgcheck behavior Subscription Manager now disables gpgcheck for any repositories it manages which have an empty gpgkey . To re-enable the repository, upload the GPG keys, and ensure that the correct URL is added to your custom content definition. Firstboot System Registration In Red Hat Enterprise Linux 6.3, during firstboot system registration, registering to Certificate-based Subscription Management is now the default option. Server side deletes System profiles are now unregistered when they are deleted from the Customer Portal so that they no longer check in with certificate-based RHN. Preferred service levels Subscription manager now allows users to associate a machine with a preferred Service Level which impacts the auto subscription and healing logic. For more information on service levels, refer to the Red Hat Enterprise Linux 6 Subscription Management Guide . Limiting updates to a specific minor release Subscription manager now allows a user to select a specific release (for example, Red Hat Enterprise Linux 6.2), which will lock a machine to that release. Prior to this update, there was no way to limit package updates in the event newer packages became available as part of a later minor release (for example, Red Hat Enterprise Linux 6.3).
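A hedged sketch of the features described above follows; it is not part of the release notes, the service level and release values are placeholders, and the exact options can vary by subscription-manager version.

rhn-migrate-classic-to-rhsm                         # migrate a system from RHN Classic to certificate-based RHN
subscription-manager service-level --set=Premium    # record a preferred service level for auto-subscription
subscription-manager release --set=6.2              # limit package updates to the 6.2 minor release
subscription-manager release --show                 # confirm the release lock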
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/entitlement
Chapter 14. Volume cloning
Chapter 14. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 14.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
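The steps above use the web console. As a hedged alternative sketch (not part of this procedure), a clone can also be requested from the CLI by creating a PVC whose dataSource points at the source PVC. The names, namespace, and storage class below are placeholders, and the requested size must equal the source PVC size because a clone cannot have a different size.

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-clone
  namespace: my-project
spec:
  storageClassName: ocs-storagecluster-ceph-rbd   # placeholder; use the source PVC's storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                               # must equal the source PVC size
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc                              # the PVC being cloned
EOF
oc get pvc pvc-clone -n my-project                # wait for the STATUS column to show Bound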
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-cloning_osp
Chapter 2. LVM Components
Chapter 2. LVM Components This chapter describes the components of an LVM Logical volume. 2.1. Physical Volumes The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. To use the device for an LVM logical volume, the device must be initialized as a physical volume (PV). Initializing a block device as a physical volume places a label near the start of the device. By default, the LVM label is placed in the second 512-byte sector. You can overwrite this default by placing the label on any of the first 4 sectors when you create the physical volume. This allows LVM volumes to co-exist with other users of these sectors, if necessary. An LVM label provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster. The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier (the UUID) for the physical volume. It also stores the size of the block device in bytes, and it records where the LVM metadata will be stored on the device. The LVM metadata contains the configuration details of the LVM volume groups on your system. By default, an identical copy of the metadata is maintained in every metadata area in every physical volume within the volume group. LVM metadata is small and stored as ASCII. Currently LVM allows you to store 0, 1 or 2 identical copies of its metadata on each physical volume. The default is 1 copy. Once you configure the number of metadata copies on the physical volume, you cannot change that number at a later time. The first copy is stored at the start of the device, shortly after the label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the area at the beginning of your disk by writing to a different disk than you intend, a second copy of the metadata at the end of the device will allow you to recover the metadata. For detailed information about the LVM metadata and changing the metadata parameters, see Appendix E, LVM Volume Group Metadata . 2.1.1. LVM Physical Volume Layout Figure 2.1, "Physical Volume layout" shows the layout of an LVM physical volume. The LVM label is on the second sector, followed by the metadata area, followed by the usable space on the device. Note In the Linux kernel (and throughout this document), sectors are considered to be 512 bytes in size. Figure 2.1. Physical Volume layout 2.1.2. Multiple Partitions on a Disk LVM allows you to create physical volumes out of disk partitions. Red Hat recommends that you create a single partition that covers the whole disk to label as an LVM physical volume for the following reasons: Administrative convenience It is easier to keep track of the hardware in a system if each real disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical volumes on a single disk may cause a kernel warning about unknown partition types at boot. Striping performance LVM cannot tell that two physical volumes are on the same physical disk. If you create a striped logical volume when two physical volumes are on the same physical disk, the stripes could be on different partitions on the same disk. This would result in a decrease in performance rather than an increase. Although it is not recommended, there may be specific circumstances when you will need to divide a disk into separate LVM physical volumes. 
For example, on a system with few disks it may be necessary to move data around partitions when you are migrating an existing system to LVM volumes. Additionally, if you have a very large disk and want to have more than one volume group for administrative purposes then it is necessary to partition the disk. If you do have a disk with more than one partition and both of those partitions are in the same volume group, take care to specify which partitions are to be included in a logical volume when creating striped volumes.
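A hedged sketch of the ideas above, initializing a single whole-disk partition as a physical volume and choosing the number of metadata copies, is shown below. The device names are placeholders and these commands destroy existing data on them, so treat this as an illustration only.

parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100% set 1 lvm on   # one partition covering the whole disk
pvcreate /dev/sdb1                                # writes the LVM label and one metadata copy (the default)
pvcreate --pvmetadatacopies 2 /dev/sdc1           # keeps a second metadata copy at the end of the device
pvs -o pv_name,pv_size,pv_mda_count               # shows each physical volume and its metadata area count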
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/LVM_components
Chapter 16. Account Console
Chapter 16. Account Console Red Hat build of Keycloak users can manage their accounts through the Account Console. They can configure their profiles, add two-factor authentication, include identity provider accounts, and oversee device activity. Additional resources The Account Console can be configured in terms of appearance and language preferences. An example is adding additional attributes to the Personal info page. For more information, see the Server Developer Guide . 16.1. Accessing the Account Console Procedure Make note of the realm name and IP address for the Red Hat build of Keycloak server where your account exists. In a web browser, enter a URL in this format: server-root /realms/{realm-name}/account. Enter your login name and password. Account Console 16.2. Configuring ways to sign in You can sign in to this console using basic authentication (a login name and password) or two-factor authentication. For two-factor authentication, use one of the following procedures. 16.2.1. Two-factor authentication with OTP Prerequisites OTP is a valid authentication mechanism for your realm. Procedure Click Account security in the menu. Click Signing in . Click Set up Authenticator application . Signing in Follow the directions that appear on the screen to use your mobile device as your OTP generator. Scan the QR code in the screen shot into the OTP generator on your mobile device. Log out and log in again. Respond to the prompt by entering an OTP that is provided on your mobile device. 16.2.2. Two-factor authentication with WebAuthn Prerequisites WebAuthn is a valid two-factor authentication mechanism for your realm. Please follow the WebAuthn section for more details. Procedure Click Account Security in the menu. Click Signing In . Click Set up a Passkey . Signing In Prepare your Passkey. How you prepare this key depends on the type of Passkey you use. For example, for a USB based Yubikey, you may need to put your key into the USB port on your laptop. Click Register to register your Passkey. Log out and log in again. Assuming authentication flow was correctly set, a message appears asking you to authenticate with your Passkey as second factor. 16.2.3. Passwordless authentication with WebAuthn Prerequisites WebAuthn is a valid passwordless authentication mechanism for your realm. Please follow the Passwordless WebAuthn section for more details. Procedure Click Account Security in the menu. Click Signing In . Click Set up a Passkey in the Passwordless section. Signing In Prepare your Passkey. How you prepare this key depends on the type of Passkey you use. For example, for a USB based Yubikey, you may need to put your key into the USB port on your laptop. Click Register to register your Passkey. Log out and log in again. Assuming authentication flow was correctly set, a message appears asking you to authenticate with your Passkey as second factor. You no longer need to provide your password to log in. 16.3. Viewing device activity You can view the devices that are logged in to your account. Procedure Click Account security in the menu. Click Device activity . Log out a device if it looks suspicious. Devices 16.4. Adding an identity provider account You can link your account with an identity broker . This option is often used to link social provider accounts. Procedure Log into the Admin Console. Click Identity providers in the menu. Select a provider and complete the fields. Return to the Account Console. Click Account security in the menu. Click Linked accounts . 
The identity provider you added appears on this page. Linked Accounts 16.5. Accessing other applications The Applications menu item shows which applications you can access. In this case, only the Account Console is available. Applications 16.6. Viewing group memberships You can view the groups you are associated with by clicking the Groups menu. If you select the Direct membership checkbox, you see only the groups you are directly associated with. Prerequisites You need the view-groups account role to be able to view the Groups menu. View group memberships
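The Groups menu requires the view-groups account role, as noted above. A hedged sketch of granting that role with the Keycloak admin CLI follows; the server URL, realm, and user name are placeholders, and the exact kcadm options may differ between versions.

kcadm.sh config credentials --server https://keycloak.example.com --realm master --user admin
kcadm.sh add-roles -r myrealm --uusername jdoe --cclientid account --rolename view-groups
# The user can then open https://keycloak.example.com/realms/myrealm/account and see the Groups menu.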
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/account-service
6.2. Exporting Data
6.2. Exporting Data LDAP Data Interchange Format (LDIF) files are used to export database entries from the Directory Server databases. LDIF is a standard format described in RFC 2849 . Note The export operations do not export the configuration information ( cn=config ), schema information ( cn=schema ), or monitoring information ( cn=monitor ). Exporting data can be useful for the following: Backing up the data in the database. Copying data to another Directory Server. Exporting data to another application. Repopulating databases after a change to the directory topology. For example, if a directory contains one database, and its contents should be split into two databases, then the two new databases receive their data by exporting the contents of the old database and importing it into the two new databases, as illustrated in Figure 6.1, "Splitting a Database Contents into Two Databases" . Figure 6.1. Splitting a Database Contents into Two Databases Warning Do not stop the server during an export operation. Directory Server runs the export operations as the dirsrv user. Therefore, the permissions of the destination directory must allow this user to write the file. 6.2.1. Exporting Data into an LDIF File Using the Command Line Directory Server supports exporting data while the instance is running or while the instance is offline: Use one of the following methods if the instance is running: Use the dsconf backend export command. See Section 6.2.1.1.1, "Exporting a Database Using the dsconf backend export Command" . Create a cn=tasks entry. See Section 6.2.1.1.2, "Exporting a Database Using a cn=tasks Entry" . If the instance is offline, use the dsctl db2ldif command. See Section 6.2.1.2, "Exporting a Database While the Server is Offline" . Important Do not export LDIF files to the /tmp or /var/tmp/ directories for the following reasons: Directory Server uses the PrivateTmp feature of systemd by default. If you place LDIF files into the /tmp or /var/tmp/ system directories, Directory Server does not see these LDIF files during import. For more information about PrivateTmp , see the systemd.exec(5) man page. LDIF files often contain sensitive data, such as user passwords. Therefore, you must not use temporary system directories to store these files. 6.2.1.1. Exporting a Database While the Server is Running 6.2.1.1.1. Exporting a Database Using the dsconf backend export Command Use the dsconf backend export command to automatically create a task that exports data to an LDIF file. For example, to export the userRoot database: By default, dsconf stores the export in a file called instance_name _ database_name - time_stamp .ldif in the /var/lib/dirsrv/slapd- instance_name /export/ directory. Alternatively, add the -l file_name option to the command to specify a different location. The dsconf backend export command supports additional options, for example, to exclude a specific suffix. To display all available options, enter: 6.2.1.1.2. Exporting a Database Using a cn=tasks Entry The cn=tasks,cn=config entry in the Directory Server configuration is a container entry for temporary entries the server uses to manage tasks. To initiate an export operation, create a task in the cn=export,cn=tasks,cn=config entry. Using a task entry enables you to export data while the server is running. An export task entry requires the following attributes: cn : Sets the unique name of the task. nsInstance : Sets the name of the database to export.
nsFilename : Sets the name of the file into which the export should be stored. Export tasks support additional parameters, for example, to exclude suffixes. For a complete list, see the cn=export section in the Red Hat Directory Server Configuration, Command, and File Reference . For example, to add a task that exports the content of the userRoot database into the /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif file: When the task is completed, the entry is removed from the directory configuration. 6.2.1.2. Exporting a Database While the Server is Offline If the server is offline when you export data, use the dsctl db2ldif command: Stop the instance: Export the database into an LDIF file. For example to export the userRoot database into the /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif file: Start the instance: 6.2.2. Exporting a Suffix to an LDIF File Using the Web Console To export a suffix using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Click Suffix Tasks , and select Export Suffix . Enter the name of the LDIF file in which you want to store the export. Directory Server will store the file in the /var/lib/dirsrv/slapd- instance_name /ldif/ directory using the specified file name. Click Export Database . 6.2.3. Enabling Members of a Group to Export Data and Performing the Export as One of the Group Members You can configure that members of a group have permissions to export data. This increases the security because you no longer need to set the credentials of cn=Directory Manager in your scripts. Additionally, you can easily grant and revoke the export permissions by modifying the group. 6.2.3.1. Enabling a Group to Export Data Use this procedure to add the cn=export_users,ou=groups,dc=example,dc=com group and enable members of this group to create export tasks. Procedure Create the cn=export_users,ou=groups,dc=example,dc=com group: Add access control instructions (ACI) that allows members of the cn=export_users,ou=groups,dc=example,dc=com group to create export tasks: Create a user: Create a user account: Set a password on the user account: Add the uid=example,ou=People,dc=example,dc=com user to the cn=export_users,ou=groups,dc=example,dc=com group: Verification Display the ACIs set on the cn=config : 6.2.3.2. Performing an Export as a Regular User You can perform exports as a regular user instead of cn=Directory Manager . Prerequisites You enabled members of the cn=export_users,ou=groups,dc=example,dc=com group to export data. See Section 6.2.3.1, "Enabling a Group to Export Data" . The user you use to perform the export is a member of the cn=export_users,ou=groups,dc=example,dc=com group. Procedure Create a export task using one of the following methods: Using the dsconf backend export command: By manually creating the task: Verification Verify that the backup was created:
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend export userRoot The export task has finished successfully", "dsconf ldap://server.example.com backend export --help", "ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn= example_export ,cn=export,cn=tasks,cn=config changetype: add objectclass: extensibleObject cn: example_export nsInstance: userRoot nsFilename: /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif", "dsctl instance_name stop", "dsctl instance_name db2ldif userroot /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif OK group dirsrv exists OK user dirsrv exists ldiffile: /var/lib/dirsrv/slapd- instance_name /ldif/example.ldif [18/Jul/2018:10:46:03.353656777 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [18/Jul/2018:10:46:03.383101305 +0200] - INFO - ldbm_back_ldbm2ldif - export userroot: Processed 160 entries (100%). [18/Jul/2018:10:46:03.391553963 +0200] - INFO - dblayer_pre_close - All database threads now stopped db2ldif successful", "dsctl instance_name start", "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" group create --cn export_users", "ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com dn: cn=config changetype: modify add: aci aci: (target = \"ldap:///cn=export,cn=tasks,cn=config\")(targetattr=\"*\") (version 3.0 ; acl \" permission: Allow export_users group to export data \" ; allow (add, read, search) groupdn = \" ldap:///cn=export_users,ou=groups,dc=example,dc=com \";) - add: aci aci: (target = \"ldap:///cn=config\")(targetattr = \"objectclass || cn || nsslapd-suffix || nsslapd-ldifdir\") (version 3.0 ; acl \" permission: Allow export_users group to access ldifdir attribute \" ; allow (read,search) groupdn = \" ldap:///cn=export_users,ou=groups,dc=example,dc=com \";)", "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" user create --uid=\" example \" --cn=\" example \" --uidNumber=\" 1000 \" --gidNumber=\" 1000 \" --homeDirectory=\" /home/example/ \" --displayName=\" Example User \"", "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" account reset_password \" uid=example,ou=People,dc=example,dc=com \" \" password \"", "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \" dc=example,dc=com \" group add_member export_users uid=example,ou=People,dc=example,dc=com", "ldapsearch -o ldif-wrap=no -LLLx -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b cn=config aci=* aci -s base dn: cn=config aci: (target = \"ldap:///cn=export,cn=tasks,cn=config\")(targetattr=\"*\")(version 3.0 ; acl \"permission: Allow export_users group to export data\" ; allow (add, read, search) groupdn = \"ldap:///cn=export_users,ou=groups,dc=example,dc=com\";) aci: (target = \"ldap:///cn=config\")(targetattr = \"objectclass || cn || nsslapd-suffix || nsslapd-ldifdir\")(version 3.0 ; acl \"permission: Allow export_users group to access ldifdir attribute\" ; allow (read,search) groupdn = \"ldap:///cn=export_users,ou=groups,dc=example,dc=com\";)", "dsconf -D \" uid=example,ou=People,dc=example,dc=com \" ldap://server.example.com backend export userRoot", "ldapadd -D \" uid=example,ou=People,dc=example,dc=com \" -W -H ldap://server.example.com dn: cn= userRoot-2021_07_23_12:55_00 ,cn=export,cn=tasks,cn=config changetype: add objectClass: extensibleObject nsFilename: /var/lib/dirsrv/slapd-instance_name/ldif/None-userroot-2021_07_23_12:55_00.ldif 
nsInstance: userRoot cn: export-2021_07_23_12:55_00", "ls -l /var/lib/dirsrv/slapd- instance_name /ldif/*.ldif total 0 -rw-------. 1 dirsrv dirsrv 10306 Jul 23 12:55 None-userroot-2021_07_23_12_55_00.ldif" ]
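When a task entry is used, as described above, its progress can be followed by reading the task entry back before the server removes it. The following is a hedged sketch; the task name matches the earlier example, and the status attribute names are the usual Directory Server task attributes, shown here as an assumption rather than a quote from this guide.

ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com \
    -b "cn=example_export,cn=export,cn=tasks,cn=config" -s base \
    "(objectclass=*)" nsTaskStatus nsTaskExitCode nsTaskLog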
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/exporting_data
Chapter 7. Designing a secure directory
Chapter 7. Designing a secure directory How Red Hat Directory Server secures the data affects all design areas. Any security design needs to protect the data in the directory and meet the security and privacy needs of both users and applications. Learn how to analyze the security needs and how to design the directory to meet these needs. 7.1. About security threats The directory may be at risk of potential security threats. Understanding the most common threats helps to outline the overall security design. Threats to directory security fall into three main categories: Unauthorized access Unauthorized tampering Denial of service 7.1.1. Unauthorized access Protecting the directory from unauthorized access may seem straightforward; however, implementing a secure solution may be more complex than it first appears. The directory information delivery path has a number of potential access points where an unauthorized client may gain access to data. The following scenarios describe just a few examples of how an unauthorized client might access the directory data: An unauthorized client can use another client's credentials to access the data. This is particularly likely when the directory uses unprotected passwords. An unauthorized client can also eavesdrop on the information exchanged between a legitimate client and Directory Server. Unauthorized access can occur from inside the company or, if the company is connected to an extranet or to the Internet, from outside the company. The authentication methods, password policies, and access control mechanisms provided by the Directory Server offer efficient ways of preventing unauthorized access. Additional resources Selecting appropriate authentication methods Designing a password policy Designing access control 7.1.2. Unauthorized tampering If intruders gain access to the directory or intercept communications between Directory Server and a client application, they have the potential to modify or tamper with the directory data. The directory service is useless if clients do not trust the data or if the directory itself cannot trust the modifications and queries it receives from clients. For example, if the directory cannot detect tampering, an attacker can change a client request to the server, or not forward it, and change the server response to the client. TLS and similar technologies can solve this problem by signing information at either end of the connection. Additional resources For more information about using TLS with Directory Server, see Securing server connections . 7.1.3. Denial of service In a denial of service attack, the attacker's goal is to prevent the directory from providing service to its clients. For example, an attacker might use all of the system resources, thereby preventing anyone else from using these resources. Directory Server can prevent denial of service attacks by setting limits on the resources allocated to a particular bind DN. For more information about setting resource limits based on the user bind DN, see the User management and authentication guide. 7.2. Analyzing security needs Analyze the environment and users to identify specific security needs. The site survey in the chapter Designing the secure directory clarifies some basic decisions about who can read and write the individual pieces of data in the directory. This information forms the basis of the security design. How the directory service is used to support the business defines how security is implemented.
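The denial of service discussion above mentions per-bind-DN resource limits. A hedged sketch of setting such limits on a single user entry follows; the entry DN and values are placeholders, and the operational attributes shown (nsSizeLimit, nsTimeLimit, nsLookThroughLimit) are assumed from standard Directory Server behavior rather than quoted from this chapter.

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com <<'EOF'
dn: uid=example,ou=People,dc=example,dc=com
changetype: modify
add: nsSizeLimit
nsSizeLimit: 500
-
add: nsTimeLimit
nsTimeLimit: 60
-
add: nsLookThroughLimit
nsLookThroughLimit: 5000
EOF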
A directory that serves an intranet does not require the same security measures as a directory that supports an extranet or e-commerce applications that are open to the Internet. If the directory only serves an intranet, consider what level of access is needed for information: How to provide users and applications with access to the information they need to perform their jobs. How to protect sensitive data regarding employees or the business from general access. If the directory serves an extranet or supports e-commerce applications over the Internet, consider the following additional points: How to offer customers a guarantee of privacy. How to guarantee information integrity. 7.2.1. Determining access rights The data analysis identifies what information users, groups, partners, customers, and applications need to access the directory service. Access rights can be granted in one of two ways: Grant all categories of users as many rights as possible while still protecting sensitive data. An open method requires accurately determining what data is sensitive or critical to the business. Grant each category of users the minimum access they require to do their jobs. A restrictive method requires detailed understanding of the information needs of each category of user inside, and possibly outside, of the organization. Regardless of the method used to determine access rights, create a simple table that lists the categories of users in the organization and the access rights granted to each. Consider creating a table that lists the sensitive data held in the directory and, for each piece of data, the steps taken to protect it. Additional resources For information about checking the identity of users, see section Selecting appropriate authentication methods . For information about restricting access to directory information, see section Designing access control . 7.2.2. Ensuring data privacy and integrity When using the directory to support exchanges with business partners over an extranet or to support e-commerce applications with customers on the Internet, ensure the privacy and the integrity of the exchanged data. Use the following ways to ensure data privacy and integrity: Encrypt data transfers. Use certificates to sign data transfers. Additional resources For information about encryption methods Directory Server provides, see section Password Storage Schemes For information about signing data, see section Securing Server Connections . For information about encrypting sensitive information in the Directory Server database, see section Encrypting the database . 7.2.3. Conducting regular audits As an extra security measure, conduct regular audits to verify the efficiency of the overall security policy by examining the log files and the information that SNMP agents record. Additional resources For more information about monitoring Directory Server, see Monitoring server and database activity For more information about log files, see Log file reference 7.2.4. Example security needs analysis The examples show how the imaginary ISP company example.com analyzes its security needs. The example.com offers web hosting and Internet access. Part of example.com activity is to host the directories of client companies. It also provides Internet access to a number of individual subscribers. 
Therefore, example.com has three main categories of information in its directory: The example.com internal information Information belonging to corporate customers Information pertaining to individual subscribers The example.com needs the following access controls: Provide access to the directory administrators of hosted companies, such as example_a and example_b , to their own directory information. Implement access control policies for hosted companies directory information. Implement a standard access control policy for all individual clients who use example.com for Internet access from their homes. Deny access to example.com corporate directory to all outsiders. Grant read access to example.com directory of subscribers to the world. 7.3. Overview of security methods Directory Server offers several methods to design an overall security policy that is adapted to specific needs. The security policy should be strong enough to prevent unauthorized users to modify or retrieve sensitive information, but also simple enough to administer easily. A complex security policy can lead to mistakes that either prevent people from accessing information that they need to access or, worse, allow people to modify or retrieve directory information that they should not be allowed to access. Table 7.1. Available security methods in Directory Server Security method Description Authentication Verifies the identity of the other party. For example, a client gives a password to Directory Server during an LDAP bind operation. Password policies Defines the criteria that a password must satisfy to consider this password valid. For example, age, length, and syntax. Encryption Protects the privacy of information. When data is encrypted, only the recipient can understand the data. Access control Tailors the access rights granted to different directory users and provides a way to specify required credentials or bind attributes. Account deactivation Disables a user account, group of accounts, or an entire domain so that Directory Server automatically rejects all authentication attempts. Secure connections Maintains the integrity of information by encrypting connections with TLS, StartTLS, or SASL. If information is encrypted during transmission, the recipient can determine that it was not modified during transit. Secure connections can be required by setting a minimum security strength factor. Auditing Determines if the security of the directory has been compromised. One simple auditing method is reviewing the log files the directory maintains. SELinux Uses security policies on the Red Hat Directory Server machine to restrict and control access to Directory Server files and processes. Combine any number of these tools for maintaining security in the security design, and incorporate other features of the directory service, such as replication and data distribution, to support the security design. 7.4. Selecting appropriate authentication methods A basic decision regarding the security policy is how users access the directory. Are anonymous users allowed to access the directory, or is every user required to log into the directory with a username and password (authenticate)? Learn about authentication methods that Directory Server provides. The directory uses the same authentication mechanism for all users, whether they are people or LDAP-aware applications. 7.4.1. Anonymous and unauthenticated access Anonymous access provides the easiest form of access to the directory. 
With anonymous access, anyone who connects to the directory can access the data. When you configure anonymous access, you cannot track who performs what kinds of searches, only that someone performs searches. You may attempt to block a specific user or group of users from accessing some kinds of directory data, but, if anonymous access is allowed to that data, those users can still access the data simply by binding to the directory anonymously. You can limit anonymous access. Usually, directory administrators only allow anonymous access for read, search, and compare privileges, not for write, add, delete, or self-write privileges. Often, administrators limit access to a subset of attributes that contain general information, such as names, telephone numbers, and email addresses. You should never allow anonymous access to more sensitive data, such as government identification numbers, for example, Social Security Numbers in the US, home telephone numbers and addresses, and salary information. You can disable anonymous access entirely if you need to tighten rules on who accesses the directory data. An unauthenticated bind is when a user attempts to bind with a user name but without a user password attribute. For example: Directory Server grants anonymous access if the user does not attempt to provide a password. An unauthenticated bind does not require that the bind DN be an existing entry. As with anonymous binds, you can disable unauthenticated binds to increase security by limiting access to the database. In addition, you can disable unauthenticated binds to prevent silent bind failures for clients. Some applications may believe that it authenticated successfully to the directory because it received a bind success message when, in reality, it failed to pass a password and simply connected with an unauthenticated bind. 7.4.2. Simple binds and secure binds If anonymous access is not allowed, users must authenticate to the directory before they can access the directory contents. With simple password authentication, a client authenticates to the server by sending a reusable password. For example, a client authenticates to the directory using a bind operation, which provides a distinguished name and a set of credentials. The server locates the entry in the directory that corresponds to the client DN and checks whether the password given by the client matches the value stored with the entry. If it does, the server authenticates the client. If it does not, the authentication operation fails, and the client receives an error message. The bind DN often corresponds to the entry of a person. However, some directory administrators prefer to bind as an organizational entry rather than as a person. The directory requires the entry used to bind to have an object class that allows the userPassword attribute. This ensures that the directory recognizes the bind DN and password. Most LDAP clients hide the bind DN from the user because users may find the long strings of DN characters hard to remember. When a client attempts to hide the bind DN from the user, it uses the following bind algorithm: The user enters a unique identifier, such as a user ID. For example, fchen . The LDAP client application searches the directory for that identifier and returns the associated distinguished name. For example, uid=fchen,ou=people,dc=example,dc=com . The LDAP client application binds to the directory using the retrieved distinguished name and the password the user supplies. 
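A hedged sketch of the bind algorithm just described (resolve the user ID to a DN, then bind with that DN and the user-supplied password) might look like the following; the host and suffix are placeholders.

ldapsearch -x -H ldap://server.example.com -b "dc=example,dc=com" "(uid=fchen)" dn
# -> dn: uid=fchen,ou=people,dc=example,dc=com
ldapwhoami -x -H ldap://server.example.com -D "uid=fchen,ou=people,dc=example,dc=com" -W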
Simple password authentication offers an easy way to authenticate users; however, it requires extra security methods. Consider restricting its use to the organization's intranet. For connections between business partners over an extranet or for transmissions with customers on the Internet, it may be best to require a secure (encrypted) connection. Note The drawback of simple password authentication is that the password is sent in plain text. If an unauthorized user is listening, this can compromise the security of the directory because that person can impersonate an authorized user. The nsslapd-require-secure-binds configuration attribute requires simple password authentication to occur over a secure connection, using TLS or Start TLS. This effectively encrypts the plaintext password so it cannot be sniffed by a malicious actor. SASL authentication or certificate-based authentication is also possible. When Directory Server and a client application establish a secure connection with each other, the client performs a simple bind with an extra level of protection by not transmitting the password in plaintext. Additional resources Securing server connections The nsslapd-require-secure-binds configuration attribute description . 7.4.3. Certificate-based authentication An alternative form of directory authentication involves using digital certificates to bind to the directory. The directory prompts users for a password when they first access it. However, rather than matching a password stored in the directory, the password opens the user certificate database. If the user supplies the correct password, the directory client application obtains authentication information from the certificate database. The client application and the directory then use this information to identify the user by mapping the user certificate to a directory DN. The directory allows or denies access based on the directory DN identified during this authentication process. Additional resources Securing Red Hat Directory Server 7.4.4. Proxy authentication Proxy authentication is a special form of authentication because the user requesting access to the directory does not bind with their own DN but with a proxy DN . The proxy DN is an entity that has appropriate permissions to perform the operation the user requests. When a person or an application receives proxy permissions, they can specify any DN as a proxy DN, with the exception of the Directory Manager DN. One of the main advantages of proxy permissions is that an LDAP application can use a single thread with a single bind to service multiple users making requests against Directory Server. Instead of having to bind and authenticate for each user, the client application binds to the Directory Server using a proxy DN. The proxy DN is specified in the LDAP operation the client application submits. For example: With this command, the manager entry cn=Directory Manager receives the permissions of the user cn=joe to apply the modifications in the mods.ldif file. The manager does not need to provide the user's password to make this change. Note The proxy mechanism is very powerful and you must use it carefully. Proxy rights are granted within the scope of the access control list (ACL), and when you grant proxy permissions to a user, this user can proxy for any user under the target. You cannot restrict the proxy permissions to only certain users.
For example, if an entry has proxy permissions to the dc=example,dc=com tree, this entry can do anything. Therefore, ensure that you set the proxy access control instruction (ACI) at the lowest possible level of the directory. Additional resources Managing access control 7.4.5. Pass-through authentication (PTA) Pass-through authentication (PTA) is when Directory Server forwards any authentication request from one server to another server. For example, when Directory Server stores all configuration information for an instance in another directory instance, Directory Server uses pass-through authentication for the User Directory Server to connect to the Configuration Directory Server. PTA plug-in handles Directory Server-to-Directory Server pass-through authentication. Many systems already have authentication mechanisms for Unix and Linux users, such as Pluggable Authentication Modules (PAM). You can configure a PAM module to tell Directory Server to use an existing authentication store for LDAP clients. Directory Server interacts with the PAM service to authenticate LDAP clients by using PAM Pass-through Authentication plug-in. With PAM pass-through authentication, when a user attempts to bind to Directory Server, Directory Server forwards the credentials to the PAM service. If the credentials match the information in the PAM service, then the user can successfully bind to the Directory Server, with all of the Directory Server access control restrictions and account settings. Note You can configure Directory Server to use PAM, however you can not configure PAM to use Directory Server for authentication. You can configure the PAM service by using the System Security Services Daemon (SSSD). Simply point the PAM Pass-through Authentication plug-in to the PAM file that SSSD uses, such as /etc/pam.d/system-auth by default. SSSD can use a variety of different identity providers, including Active Directory, Red Hat Directory Server, or other directories like OpenLDAP, or local system settings. 7.4.6. Passwordless authentication An authentication attempt first evaluates if the user account can authenticate. The account must fall under the following criteria: It must be active. It must not be locked. It must have a valid password according to any applicable password policy. Sometimes a client application needs to perform the authentication of a user account when the user should not or cannot bind to Directory Server for real. For example, a system may be using PAM to manage system accounts, and you configured PAM to use the LDAP directory as its identity store. However, the system uses passwordless credentials, such as SSH keys or RSA tokens, and those credentials cannot be passed to authenticate to the Directory Server. Red Hat Directory Server supports the Account Usability Extension Control extension for LDAP searches. This extension returns an extra line for each returned entry that gives the account status and some information about the password policy for that account. A client or application can then use that status to evaluate authentication attempts made outside Directory Server for that user account. Basically, this control signals whether a user should be allowed to authenticate without having to perform an authentication operation. In addition, you can use this extension with system-level services like PAM to allow passwordless logins which still use Directory Server to store identities and even control account status. 
Note By default, only the Directory Manager can use the Account Usability Extension Control. To allow other users to use the control, set the appropriate ACI on the supported control entry, oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config . Additional resources Checking account availability for passwordless access 7.5. Designing an account lockout policy An account lockout policy can protect both directory data and user passwords by preventing unauthorized or compromised access to the directory. After Directory Server locks, or deactivates , an account, that user cannot bind to the directory, and any authentication operation fails. Use the nsAccountLock operational attribute to implement the account deactivation. When an entry contains the nsAccountLock attribute with a value of true , the server rejects a bind attempt by that account. Directory Server can define an account lockout policy based on specific, automatic criteria: Directory Server can associate an account lockout policy with the password policy. When a user fails to log in with the proper credentials after a specified number of times, Directory Server locks the account until an administrator manually unlocks it. Such a policy protects against malicious actors who try to break into the directory by repeatedly trying to guess a user password. Directory Server can lock an account after a certain amount of time passed. You can use this policy to control access for temporary users, such as interns, students, or seasonal workers, who have time-limited access based on the time the account was created. Alternatively, you can create an account policy that inactivates user accounts if the account has been inactive for a certain amount of time since the last login time. Use the Account Policy Plug-in to implement a time-based account lockout policy and set global settings for the directory. You can create multiple account policy subentries for different expiration times and types and then apply these policies to entries through classes of service. Additional resources Designing a password policy 7.6. Designing a password policy A password policy is a set of rules that manage how passwords are used in a given system. The Directory Server password policy specifies the criteria that a password must satisfy to be considered valid, like the age, length, and whether users can reuse passwords. 7.6.1. How password policy works Directory Server supports fine-grained password policies, which means Directory Server defines a password policy at any point in the directory tree. Directory Server defines password policies at the following levels: The entire directory Such a policy is known as the global password policy. When you configure and enable this policy, Directory Server applies it to all users within the directory except for the Directory Manager entry and those user entries that have local password policies enabled. This policy type can define a common, single password policy for all directory users. A particular subtree of the directory Such a policy is known as the subtree level or local password policy. When you configure and enable this policy, Directory Server applies it to all users under the specified subtree. This policy type is good in a hosting environment to support different password policies for each hosted company rather than enforcing a single policy for all the hosted companies. A particular user of the directory Such a policy is known as the user level or local password policy. 
When you configure and enable this policy, Directory Server applies it to the specified user only. This policy type can define different password policies for different directory users. For example, specify that some users change their passwords daily, some users change it monthly, and all other users change it every six months. By default, Directory Server includes entries and attributes that are relevant to the global password policy, meaning the same policy is applied to all users. To set up a password policy for a subtree or user, add additional entries at the subtree or user level and enable the nsslapd-pwpolicy-local attribute of the cn=config entry. This attribute acts as a switch turning fine-grained password policy on and off. You can change password policies by using the command line or the web console. In the command line, the dsconf pwpolicy command changes global policies and the dsconf localpwp command changes local policies. You can find the procedures for setting password policies in the Configuring password policies section. Password policy checking process The password policy entries that you add to the directory determine the type (global or local) of the password policy the Directory Server should enforce. When a user attempts to bind to the directory, Directory Server determines whether a local policy has been defined and enabled for the user entry. Directory Server checks policy settings in the following order: Directory Server determines whether the fine-grained password policy is enabled. The server checks the value ( on or off ) of the nsslapd-pwpolicy-local attribute in the cn=config entry. If the value is set to off , the server ignores the policies defined at the subtree and user levels and enforces the global password policy. Directory Server determines whether a local policy is defined for a subtree or user. The server checks for the pwdPolicysubentry attribute in the corresponding user entry: If the attribute is present, the server enforces the local password policy configured for the user. If the entry has the attribute but the value is empty or invalid (for example, points to a non-existent entry), the server logs an error message. If the pwdPolicysubentry attribute is not found in the user entry, the server checks the parent entry, grandparent entry, and other upper-level entries until the top is reached. If the pwdPolicysubentry attribute is not found in any upper-level entries, the server applies a global policy. The server compares the user-supplied password with the value specified in the user directory entry to make sure they match. The server also uses the rules that the password policy defines to ensure that the password is valid before allowing the user to bind to the directory. In addition to bind requests, password policy checking also occurs during add and modify operations if the userPassword attribute is present in the request. Modifying the value of userPassword checks two password policy settings: The password minimum age policy is activated. If the minimum age requirement is not satisfied, the server returns the constraintViolation error. The password update operation fails. The password history policy is activated. If the new value of the userPassword attribute is in the password history, or if it is the same as the current password, the server returns a constraintViolation error. The password update operation fails. 
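As a hedged illustration of the global settings discussed above: the dsconf pwpolicy command is the documented interface, while the sketch below writes the equivalent cn=config attributes directly, and the values shown are placeholders.

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com <<'EOF'
dn: cn=config
changetype: modify
replace: nsslapd-pwpolicy-local
nsslapd-pwpolicy-local: on
-
replace: passwordHistory
passwordHistory: on
-
replace: passwordInHistory
passwordInHistory: 6
-
replace: passwordMinAge
passwordMinAge: 86400
EOF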
Both adding and modifying the value of userPassword checks password policies set for the password syntax: The password minimum length policy is activated. If the new value of the userPassword attribute is less than the required minimum length, the server returns the constraintViolation error. The password update operation fails. The password syntax checking policy is activated. If the new value of userPassword is the same as another attribute of the entry, the server returns a constraintViolation error. The password update operation fails. 7.6.2. Password policy attributes Learn about the attributes that you can use to create a password policy for the server. Directory Server stores password policy attributes in the cn=config entry, and you can change these settings by using dsconf utility. Maximum number of failures This setting enables password-based account lockouts in the password policy. If a user attempts to log in a certain number of times and fails, Directory Server locks that account until an administrator unlocks it or, optionally, a certain amount of time passes. Use passwordMaxFailure configuration parameter to set the maximum number of failures. Directory Server has two ways to count login attempts and lock an account when login attempts reach the limit: Directory Server locks the account when the number hits ( n ) Directory Server locks the account only when the count exceeds ( n+1 ). For example, if the failure limit is three attempts, the account can be locked at the third failed attempt ( n ) or at the fourth failed attempt ( n+1 ). The n+1 behavior is the historical behavior for LDAP servers, so it is considered as legacy behavior. Newer LDAP clients expect a stricter hard limit. By default, Directory Server uses the strict limit (n), but you can change the legacy behavior in the passwordLegacyPolicy configuration parameter. Password change after reset The Directory Server password policy can specify whether users must change their passwords after the first login or after the administrator has reset the password. The default passwords that the administrator sets typically follow a company convention, such as the user initials, user ID, or company name. If this convention is discovered, it is usually the first value that a malicious actor uses in an attempt to break into the system. Therefore, it is recommended to require users to change their password after an administrator resets these passwords. If you configure this setting for the password policy, users are required to change their password even if user-defined passwords are disabled. If the password policy does not require or does not allow the password change by a user, administrator-assigned passwords should not follow any obvious convention and should be difficult to discover. The default configuration does not require that users change their password after it has been reset. User-defined passwords You can set a password policy to allow or not allow users to change their own passwords. A good password is the key to a strong password policy. Good passwords should not use trivial words, such as dictionary words, names of pets or children, birthdays, user IDs, or any other information about the user that can be easily discovered (or stored in the directory itself). A good password should contain a combination of letters, numbers, and special characters. For the sake of convenience, however, users often use passwords that are easy to remember. 
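As a consolidated, hedged example of a global policy, the following ldapmodify call sets the lockout and reset controls described above together with the expiration, syntax, history, minimum-age, and storage-scheme attributes covered in the rest of this section. The attribute names are the standard cn=config password policy attributes, the host name is a placeholder, the numeric values are purely illustrative, and the time-based attributes are expressed in seconds (passwordMaxAge of 7776000 is 90 days, passwordWarning of 86400 is one day, passwordMinAge of 172800 is two days).

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com << EOF
dn: cn=config
changetype: modify
replace: passwordLockout
passwordLockout: on
-
replace: passwordMaxFailure
passwordMaxFailure: 3
-
replace: passwordMustChange
passwordMustChange: on
-
replace: passwordChange
passwordChange: on
-
replace: passwordExp
passwordExp: on
-
replace: passwordMaxAge
passwordMaxAge: 7776000
-
replace: passwordWarning
passwordWarning: 86400
-
replace: passwordGraceLimit
passwordGraceLimit: 3
-
replace: passwordCheckSyntax
passwordCheckSyntax: on
-
replace: passwordMinLength
passwordMinLength: 8
-
replace: passwordHistory
passwordHistory: on
-
replace: passwordInHistory
passwordInHistory: 6
-
replace: passwordMinAge
passwordMinAge: 172800
-
replace: passwordStorageScheme
passwordStorageScheme: PBKDF2-SHA512
EOF

Here passwordLockout is the switch that enables account lockout for the passwordMaxFailure counter, passwordMustChange implements change-after-reset, and passwordChange controls whether users may set their own passwords; a subtree or user policy accepts the same attribute names in its own policy entry.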
Because users tend to choose passwords that are easy to remember, some enterprises choose to set passwords for users that meet the criteria of a strong password and do not allow users to change their passwords. Setting passwords by administrators for users has the following disadvantages: It requires a substantial amount of administrator time. Because administrator-specified passwords are typically more difficult to remember, users are more likely to write their password down, increasing the risk of discovery. By default, user-defined passwords are allowed. Password expiration The password policy can allow users to use the same passwords indefinitely or specify that passwords expire after a given time. In general, the longer a password is in use, the more likely it is to be discovered. However, if passwords expire too often, users may have trouble remembering them and resort to writing their passwords down. A common policy is to have passwords expire every 30 to 90 days. The server remembers the password expiration specification even if password expiration is disabled. If the password expiration is re-enabled, passwords are valid only for the duration set before it was last disabled. For example, if you configure passwords to expire every 90 days, and then you disable and re-enable the password expiration, the default password expiration duration stays 90 days. By default, user passwords never expire. Expiration warning If you set a password expiration period, it is a good idea to send users a warning before their passwords expire. Directory Server displays the warning when a user binds to the server. If password expiration is enabled, by default, Directory Server sends a warning to a user, by using an LDAP message, one day before the user password expires. The user client application should support this feature. The valid range for a password expiration warning is from one to 24,855 days. Note The password never expires until Directory Server has sent the expiration warning. Grace login limit A grace period for expired passwords means that users can still log in to the system, even if their passwords have expired. To allow some users to log in using an expired password, specify the number of grace login attempts that are allowed to a user after the password has expired. By default, Directory Server does not permit grace logins. Password syntax checking Password syntax checking enforces rules for password strings so that any password has to meet or exceed certain criteria. All password syntax checks can be applied globally, per a subtree, or per a user. The passwordCheckSyntax attribute manages the password syntax checking. The default password syntax requires a minimum password length of eight characters and that no trivial words are used in the password. A trivial word is any value stored in the uid , cn , sn , givenName , ou , or mail attributes of the user entry. Additionally, you can use other forms of password syntax enforcement, providing different optional categories for the password syntax: Minimum number of required characters in the password ( passwordMinLength ). Minimum number of digit characters, meaning numbers between zero and nine ( passwordMinDigits ). Minimum number of ASCII alphabetic characters, both upper- and lower-case ( passwordMinAlphas ). Minimum number of uppercase ASCII alphabetic characters ( passwordMinUppers ). Minimum number of lowercase ASCII alphabetic characters ( passwordMinLowers ). Minimum number of special ASCII characters, such as !@#$ ( passwordMinSpecials ). 
Minimum number of 8-bit characters (passwordMin8bit). Maximum number of times that the same character can be immediately repeated, such as aaabbb ( passwordMaxRepeats ). Minimum number of character categories a password requires; a category can be upper-case or lower-case letters, special characters, digits, or 8-bit characters ( passwordMinCategories ). Directory Server checks the password against the CrackLib dictionary ( passwordDictCheck ). Directory Server checks if the password contains a palindrome ( passwordPalindrome ). Directory Server prevents setting a password that has more consecutive characters from the same category ( passwordMaxClassChars ). Directory Server prevents setting a password that contains certain strings ( passwordBadWords ). Directory Server prevents setting a password that contains strings set in administrator-defined attributes ( passwordUserAttributes ). The more categories of syntax required, the stronger the password. By default, password syntax checking is disabled. Password length The password policy can require a minimum length for user passwords. In general, shorter passwords are easier to crack. A recommended minimal length for passwords is eight characters. This is long enough to be difficult to crack but short enough that users can remember the password without writing it down. The valid range of values for this attribute is from two to 512 characters. By default, the server does not have a minimum password length. Password minimum age The password policy can prevent users from changing their passwords for a specified time. When you set the passwordMinAge attribute in conjunction with the passwordHistory attribute, users cannot reuse old passwords. For example, if the password minimum age ( passwordMinAge ) attribute is two days, users cannot repeatedly change their passwords during a single session. This prevents them from cycling through the password history so that they can reuse an old password. The valid range of values for the passwordMinAge attribute is from zero to 24 855 days. A value of zero ( 0 ) indicates that the user can change the password immediately. Password history The Directory Server can store from two to 24 passwords in the password history. If a password is in the history, a user cannot reset his password to that old password. This prevents users from reusing a couple of passwords that are easy to remember. Alternatively, you can disable the password history, thus allowing users to reuse passwords. The passwords remain in history even if the password history is off. If the password history is turned back on, users cannot reuse the passwords that were in the history before you disabled the password history. The server does not maintain a password history by default. Password storage schemes The password storage scheme specifies the type of encryption used to store Directory Server passwords within the directory. The Directory Server supports several different password storage schemes: Password-Based Key Derivation Function 2 (PBKDF2_SHA256, PBKDF2-SHA1, PBKDF2-SHA256, PBKDF2-SHA512) This is the most secure password storage scheme. The default storage scheme is PBKDF2-SHA512. Salted Secure Hash Algorithm (SSHA, SSHA-256, SSHA-384, and SSHA-512) The recommended SSHA scheme is SSHA-256 or stronger. CLEAR This means no encryption and is the only option that can be used with SASL Digest-MD5, so using SASL requires the CLEAR password storage scheme. 
Although passwords a directory stores can be protected through the use of access control information (ACI) instructions, it is still not a good idea to store plain text passwords in the directory. Secure Hash Algorithm (SHA, SHA-256, SHA-384, and SHA-512) This is less secure than SSHA. UNIX CRYPT This algorithm provides compatibility with UNIX passwords. MD5 This storage scheme is less secure than SSHA, but it is included for legacy applications that require MD5. Salted MD5 This storage scheme is more secure than plain MD5 hash, but still less secure than SSHA. This storage scheme is not included for use with new passwords but to help with migrating user accounts from directories that support salted MD5. Password last change time The passwordTrackUpdateTime configuration attribute tells the server to record a timestamp for the last time Directory Server updated a password for an entry. Directory Server stores the password change time as an operational attribute pwdUpdateTime in the user entry, which is separate from the modifyTimestamp or lastModified operational attributes. By default, the server does not store the password last change time. Additional resources Configuration attributes under cn=config entry 7.6.3. Designing a password policy in a replicated environment Directory Server enforces password and account lockout policies in a replicated environment as follows: Password policies are enforced on the data supplier. Account lockout is enforced on all servers in the replication setup. Directory Server replicates password policy information in the directory, such as password age, the account lockout counter, and the expiration warning counter. However, Directory Server does not replicate the configuration information, such as the password syntax and the history of password modifications. Directory Server stores this information locally. When configuring a password policy in a replicated environment, consider the following points: All replicas issue warnings of an impending password expiration. Directory Server keeps this information locally on each server, so if a user binds to several replicas in turn, the user receives the same warning several times. In addition, if the user changes the password, it may take time for replicas to receive this information. If a user changes a password and then immediately rebinds, the bind may fail until the replica registers the changes. The same bind behavior should occur on all servers, including suppliers and replicas. Always create the same password policy configuration information on each server. Account lockout counters may not work as expected in a multi-supplier environment. 7.7. Designing access control After deciding on the authentication schemes, decide how to use those schemes to protect the information contained in the directory. Access control can specify that certain clients have access to particular information, while other clients do not. Use one or more access control lists (ACLs) to define access control. The directory ACLs consist of a series of one or more access control information (ACI) statements that either allow or deny permissions, such as read, write, search, and compare, to specified entries and their attributes. 
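As a preview of the ACI format detailed in the next subsections, a single statement that grants anonymous read access to a few naming attributes could look like the following sketch; the suffix, attribute list, and ACL name are placeholders.

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com << EOF
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr = "cn || sn || mail")(version 3.0; acl "Anonymous read of naming attributes"; allow (read, search, compare) userdn = "ldap:///anyone";)
EOF

Here the target is the attribute list, the permission is allow (read, search, compare), and the bind rule is userdn = "ldap:///anyone".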
Using the ACL, you can set permissions at any level of the directory tree: The entire directory A particular subtree of the directory Specific entries in the directory A specific set of entry attributes Any entry that matches a given LDAP search filter In addition, you can set permissions for a specific user, for all users belonging to a specific group, or for all users of the directory. You can define access for a network location, such as an IP address (IPv4 or IPv6) or a DNS name. 7.7.1. About the ACI format When designing the security policy, you need to understand how ACIs are represented in the directory, and what permissions you can set. Directory ACIs use the following general form: The ACI variables have the following description: Target Specifies the entry, usually a subtree, that the ACI targets, the attribute it targets, or both. The target identifies the directory element that the ACI applies to. An ACI can target only one entry, but it can target multiple attributes. In addition, the target can contain an LDAP search filter. You can set permissions for widely scattered entries that contain common attribute values. Permission Identifies the actual permission the ACI sets. The permission variable states that the ACI allows or denies a specific type of directory access, such as read or search, to the specified target. Bind rule Identifies the bind DN or network location to which the permission applies. The bind rule may also specify an LDAP filter, and if that filter is evaluated to be true for the binding client application, then the ACI applies to the client application. Therefore, for the directory object target, ACIs allow or deny permission if a bind rule is true. Permission and a bind rule are set as a pair, and every target can have multiple permission-bind rule pairs. You can set multiple access controls for any given target effectively. For example: Additional resources For a complete description of the ACI format, see Managing access control 7.7.1.1. Targets An ACI can target a directory entry and attributes on the entry. Targeting a directory entry includes that entry and all of its child entries in the scope of the permission. If you do not explicitly define a target entry for the ACI, then the ACI targets to the directory entry that contains the ACI statement. An ACI can target only one entry or only those entries that match a single LDAP search filter. Targeting attributes applies the permission to only a subset of attribute values. When you target a set of attributes, specify which attributes an ACI targets or which attributes an ACI does not target explicitly. Excluding attributes in the target sets permission for all but a few attributes an object class structure allows. Additional resources Targeting a directory entry Targeting attributes 7.7.1.2. Permissions Permissions can allow or deny access. Avoid denying permissions, for more details, see Allowing or denying access Permissions can be any operation performed on the directory service: Permission Description Read Indicates if a user can read directory data. Write Indicates if a user can change or create a directory. In addition, this permission allows the user to delete directory data but not the entry itself. However, to delete an entire entry, the user must have the delete permissions. Search Indicates if a user can search the directory data. This differs from read permission in that read permission allows a user to view the directory data if it is returned as part of a search operation. 
For example, if you allow searching for common names ( cn ) and reading a person room number, then Directory Server can return the room number as part of the common name search. However, a user cannot use the room number as the subject of a search. Use this combination to prevent people from searching who sits in a particular room. Compare Indicates if a user can compare the data. The compare permission implies the ability to search, however, Directory Server does not return actual directory information as a result of the search. Instead, Directory Server returns a simple Boolean value that indicates whether the compared values match. Use compare operation to match userPassword attribute values during directory authentication. Self-write Use the self-write permission only for group management. With this permission, a user can add to or delete themselves from a group. Add Indicates if a user can create child entries under the targeted entry. Delete Indicates if a user can delete the targeted entry. Proxy Indicates that the user can use any other DN, except Directory Manager, to access the directory with the rights of this DN. 7.7.1.3. Bind rules The bind rule defines the bind DNs (users) to which an ACI applies. It can also specify bind attributes, such as time of day or IP address. In addition, bind rules easily define that the ACI applies only to a user own entry. Users can update their own entries without running the risk of a user updating another user entry. Bind rules indicate the following situations when an ACI applies: If the bind operation arrives from a specific IP address (IPv4 or IPv6) or DNS hostname. You can use it to force all directory updates to occur from a given machine or network domain. If a user binds anonymously. Setting permission for anonymous bind means that the permission applies to anyone who binds to the directory. For anyone who successfully binds to the directory. You can use it to allow general access while preventing anonymous access. If a user has bound as the immediate parent of the entry. If a user meets a specific LDAP search criteria. Directory Server provides the following keywords for bind rules: Parent If the bind DN is the immediate parent entry, then the bind rule is true. You can grant specific permissions that allow a directory entry to manage its immediate child entries. Self If the bind DN is the same as the entry requesting access, then the bind rule is true. You can grant specific permissions to allow individuals to update their own entries. All The bind rule is true for anyone who has successfully bound to the directory. Anyone The bind rule is true for everyone. Use this keyword to allow or deny anonymous access. 7.7.2. Setting permissions By default, Directory Server denies access of any kind to all users, with the exception of the Directory Manager. Consequently, you must set ACIs for users to be able to access the directory. 7.7.2.1. The precedence rule When a user attempts any type of access to a directory entry, Directory Server checks the access control set in the directory. To determine access, Directory Server applies the precedence rule . This rule states that when two conflicting permissions exist, the permission that denies access always takes precedence over the permission that grants access. 
For example, if Directory Server denies write permission at the directory root level, and that permission applies to everyone accessing the directory, then no user can write to the directory regardless of any other permissions that may allow write access. To allow a specific user to write permissions to the directory, you need to set the scope of the original deny-for-write so that it does not include that user. Then, you need to set an additional allow-for-write permission for the user. 7.7.2.2. Allowing or denying access You can allow or deny access to the directory tree, but be careful of explicitly denying the access. Because of the precedence rule, if Directory Server finds rules that deny access at a higher level of the directory, it denies access at lower levels regardless of any conflicting permissions that may grant access. Limit the scope of allow access rules to include only the smallest possible subset of users or client applications. For example, you can set permissions to allow users to write to any attribute on their directory entry, but then deny all users except members of the Directory Administrators group the privilege of writing to the uid attribute. Alternatively, write two access rules that allow write access in the following ways: Create one rule that allows write privileges to every attribute except the uid attribute. This rule should apply to everyone. Create one rule that allows write privileges to the uid attribute. This rule should apply only to members of the Directory Administrators group. Providing only allow privileges avoids the need to set an explicit deny privilege. 7.7.2.3. When to deny access It is rarely necessary to set an explicit deny privilege, however it is useful in the following cases: You have a large directory tree with a complex ACL spread across it. For security reasons, Directory Server may need to suddenly deny access to a particular user, group, or physical location. Rather than spending the time to carefully examine the existing ACL to understand how to restrict the allow permissions, temporarily set the explicit deny privilege until you have time to do the analysis. If the ACL becomes this complex, then the deny ACI only adds costs to the administrative overhead in the future. As soon as possible, rework the ACL to avoid the explicit deny privilege and then simplify the overall access control scheme. You set access control based on a day of the week or an hour of the day. For example, Directory Server can deny all writing activities from Sunday at 11:00 p.m. ( 2300 ) to Monday at 1:00 a.m. ( 0100 ). From an administrative point of view, it may be easier to manage an ACI that explicitly restricts time-based access of this type than to search through the directory for all the allow-for-write ACIs and restrict their scopes in this time frame. You restrict privileges when delegating directory administration authority to multiple people. To allow a person or group of people to manage some part of the directory tree, without allowing them to modify some aspect of the tree, use an explicit deny privilege. For example, to make sure that Mail Administrators do not allow write access to the common name ( cn ) attribute, set an ACI that explicitly denies write access to the common name attribute. 7.7.2.4. Where to place access control rules You can add access control rules to any entry in the directory. 
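The pair of allow rules described in the section on allowing or denying access above could be expressed roughly as follows; the suffix and the Directory Administrators group DN are assumptions, and the point of the sketch is that no explicit deny ACI is needed.

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com << EOF
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr != "uid")(version 3.0; acl "Users update own entries except uid"; allow (write) userdn = "ldap:///self";)
aci: (targetattr = "uid")(version 3.0; acl "Directory Administrators manage uid"; allow (write) groupdn = "ldap:///cn=Directory Administrators,ou=Groups,dc=example,dc=com";)
EOF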
Often, administrators add access control rules to entries with the object classes domainComponent , country , organization , organizationalUnit , inetOrgPerson , or group . Organize rules into groups as much as possible in order to simplify ACL management. Rules apply to their target entry and to all of that entry children. Consequently, it is best to place access control rules on root points in the directory or on directory branch points, rather than scatter them across individual leaf entries, such as person. 7.7.2.5. Using filtered access control rules You can use LDAP search filters to set access to any directory entry that matches a defined set of criteria. For example, allow read access for any entry that contains an organizationalUnit attribute that is set to Marketing . Filtered access control rules allow predefined levels of access. For example, the directory contains home address and telephone number information. Some people want to publish this information, while others want to be unlisted. You can use the following way to configure access: Add an attribute to every user directory entry called publishHomeContactInfo . Set an access control rule that grants read access to the homePhone and homePostalAddress attributes only for entries whose publishHomeContactInfo attribute is set to true (enabled). Use an LDAP search filter to express the target for this rule. Allow the directory users to change the value of their own publishHomeContactInfo attribute to true or false. In this way, the directory user can decide whether this information is publicly available. Additional resources LDAP search filters 7.7.3. Viewing ACIs: Get effective rights Get effective rights (GER) is an extended ldapsearch command which returns the access control permissions set on each attribute within an entry. With this search, an LDAP client can determine what operations the server access control configuration allows a user to perform. The access control information is divided into two groups of access: entry rights and attribute rights. Entry rights are the rights, such as modify or delete, that are limited to that specific entry. Attribute rights are the access rights to every instance of that attribute throughout the directory. Such a detailed access control may be necessary in the following situations: You can use the GER commands to better organize access control instructions for the directory. It is often necessary to restrict what one group of users can view or edit compared to another group. For example, members of the QA Managers group may have the right to search and read attributes like manager and salary but only HR Group members have the right to modify or delete them. Checking effective rights for a user or group is one way to verify that an administrator sets the appropriate access controls. You can use the GER commands to see what attributes you can view or modify on your personal entry. For example, a user should have access to attributes such as homePostalAddress and cn , but may only have read access to manager and salary attributes. Additional resources Checking access rights on entries using Get Effective Rights search Common scenarios for a Get Effective Rights search 7.7.4. Using ACIs: Some hints and tricks The following tips can help to lower the administrative burden of managing the directory security model and improve the directory performance characteristics: Minimize the number of ACIs in the directory. 
Although the Directory Server can evaluate over 50,000 ACIs, it is difficult to manage a large number of ACI statements. A large number of ACIs makes it hard for human administrators to immediately determine the directory object available to particular clients. Directory Server minimizes the number of ACIs in the directory by using macros. Use the macro to represent a DN, or its part, in the ACI target or in the bind rule, or both. Balance allow and deny permissions. Although the default rule is to deny access to any user who does not have specifically granted access, it may be better to reduce the number of ACIs by using one ACI to allow access close to the root of the tree, and a small number of deny ACIs close to the leaf entries. This scenario avoids the use of multiple allow ACIs close to the leaf entries. Identify the smallest set of attributes in an ACI. When allowing or denying access to a subset of attributes, choose if the smallest list is the set of attributes that are allowed or the set of attributes that are denied. Then set the ACI so that it only requires managing the smallest list. For example, the person object class contains a large number of attributes. To allow a user to update only a few attributes, write the ACI that allows write access for only those attributes. However, to allow a user to update all attributes, except the few attributes, create the ACI that allows write access for everything except these few named attributes. Use LDAP search filters carefully. Search filters do not directly name the object for which you manage access. Consequently, their use can produce unexpected results. Especially, when the directory becomes more complex. Before using search filters in ACIs, run an ldapsearch operation using the same filter to make the result clear. Do not duplicate ACIs in differing parts of the directory tree. Guard against overlapping ACIs. For example, if there is an ACI at the directory root point that allows a group write access to the commonName and givenName attributes, and another ACI that allows the same group write access for only the commonName attribute, then consider updating the ACIs so that only one control grants the write access to the group. When the directory grows more complex, the risk of accidentally overlapping ACIs quickly increases. By avoiding ACI overlap, security management becomes easier by reducing the total number of ACIs contained in the directory. Name ACIs. While naming ACIs is optional, giving each ACI a short, meaningful name helps with managing the security model. Group ACIs as closely together as possible within the directory. Try to limit ACI location to the directory root point and to major directory branch points. Grouping ACIs helps to manage the total list of ACIs, as well as helping keep the total number of ACIs in the directory to a minimum. Avoid using double negatives, such as deny write if the bind DN is not equal to cn=Joe . Although this syntax is perfectly acceptable for the server, it is not human-readable. Additional resources Using macro access control instructions 7.7.5. Applying ACIs to the root DN (Directory Manager) Normally, access control rules do not apply to the Directory Manager user. The Directory Manager is defined in the dse.ldif file, not in the regular user database, and ACI targets do not include that user. The Directory Manager requires a high level of access in order to perform maintenance tasks and to respond to incidents. 
However, you can grant a certain level of access control to the Directory Manager to prevent unauthorized access or attacks from being performed as the root user. Use the RootDN Access Control plug-in to set certain access control rules specific to the Directory Manager user: Time-based access controls, to allow or deny access on certain days and specific time ranges. IP address rules, to allow or deny access from defined IP addresses, subnets, and domains. Host access rules, to allow or deny access from specific hosts, domains, and subdomains. You can set only one access control rule for the Directory Manager. It is in the plug-in entry, and it applies to the entire directory. Important Ensure that the Directory Manager account has an appropriate level of access. This administrative user might need to perform maintenance operations in off-hours or to respond to failures. In this case, setting a too restrictive time or day rule can prevent the Directory Manager user from managing the directory effectively. Additional resources Setting access control on the Directory Manager account . 7.8. Encrypting the database The database stores information in plain text. Consequently, access control measures may not sufficiently protect some extremely sensitive information, such as government identification numbers or passwords. It may be possible to gain access to a server's persistent storage files, either directly through the file system or by accessing discarded disk drives or archive media. With database encryption, individual attributes can be encrypted as they are stored in the database. When configured, every instance of a particular attribute, even index data, is encrypted and can only be accessed using a secure channel, such as TLS. Additional resources For information on using database encryption, see the Managing attribute encryption chapter. 7.9. Securing server connections After designing the authentication scheme for identified users and the access control scheme for protecting information in the directory, the next step is to design a way to protect the integrity of the information as it passes between servers and client applications. For both server-to-client connections and server-to-server connections, the Directory Server supports a variety of secure connection types: Transport Layer Security (TLS) Directory Server can use LDAP over TLS to provide secure communications over the network. The encryption method selected for a particular connection is the result of a negotiation between the client application and Directory Server. Start TLS Directory Server also supports Start TLS, a method of initiating a Transport Layer Security (TLS) connection over a regular, unencrypted LDAP port. Simple Authentication and Security Layer (SASL) SASL is a security framework that you can use to configure different mechanisms to authenticate a user to the server, depending on what mechanism you enable in both client and server applications. In addition, SASL can establish an encrypted session between the client and a server. Directory Server uses SASL with GSS-API to enable Kerberos logins, and for almost all server-to-server connections, including replication, chaining, and pass-through authentication. Directory Server cannot use SASL with Windows synchronization. Secure connections are recommended for any operations which handle sensitive information, such as replication, and are mandatory for some operations, such as Windows password synchronization. 
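Returning to the RootDN Access Control plug-in described at the start of this section, a hedged sketch of a time- and network-based limit for the Directory Manager is shown below. The plug-in entry DN and the rootdn-open-time, rootdn-close-time, and rootdn-allow-ip attribute names follow the RootDN Access Control plug-in as shipped with 389 Directory Server, the values are placeholders, and you should verify the exact names and whether a restart is required against your release before relying on them.

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com << EOF
dn: cn=RootDN Access Control,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
-
replace: rootdn-open-time
rootdn-open-time: 0600
-
replace: rootdn-close-time
rootdn-close-time: 2200
-
replace: rootdn-allow-ip
rootdn-allow-ip: 192.0.2.*
EOF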
Directory Server can support TLS connections, SASL, and non-secure connections simultaneously, and it can use SASL authentication and TLS connections at the same time. For example, you can configure a Directory Server instance to require TLS connections to the server and also support SASL authentication for replication connections. This means it is not necessary to choose whether to use TLS or SASL in a network environment. In addition, you can set a minimum level of security for connections to the server. The security strength factor measures, in key strength, how strong a secure connection is. You can set an ACI that requires that certain operations, such as password changes, occur only if the connection is of a certain strength or higher. You can also set a minimum SSF that can essentially disable standard connections and require TLS, Start TLS, or SASL for every connection. The Directory Server supports TLS and SASL simultaneously, and the server calculates the SSF of all available connection types and selects the strongest one. Additional resources For more information about using TLS, Start TLS, and SASL, see Securing Red Hat Directory Server 7.10. Using SELinux policies SELinux is a collection of security policies that define access controls for the applications, processes, and files on a system. Security policies are a set of rules that tell SELinux what can or cannot be accessed to prevent unauthorized access and tampering. SELinux categorizes files, directories, ports, processes, users, and other objects on the server. SELinux places each object in an appropriate security context to define how the object is allowed to behave on the server through its role, user, and security level. SELinux groups these roles for objects into domains, and SELinux rules define how the objects in one domain are allowed to interact with objects in another domain. Directory Server has the following domains: dirsrv_t for the Directory Server dirsrv_snmp_t for the SNMP ldap_port_t for LDAP ports These domains provide security contexts for all of the processes, files, directories, ports, sockets, and users for the Directory Server: SELinux labels files and directories for each instance with a specific security context. Most of the main directories that Directory Server uses have subdirectories for all local instances, no matter how many, therefore SELinux easily applies a single policy to new instances. SELinux labels ports for each instance with a specific security context. SELinux constrains all Directory Server processes within an appropriate domain. Each domain has specific rules that define what actions are authorized for the domain. SELinux denies any access to the instance if SELinux policy does not specify it. SELinux has three different levels of enforcement: disabled No SELinux. permissive SELinux processes rules but does not enforce them. enforcing SELinux strictly enforces all rules. Red Hat Directory Server has defined SELinux policies that allow it to run as normal under strict SELinux enforcing mode. Directory Server can run in different modes, one for normal operations and one for database operations, such as import ( ldif2db mode). The SELinux policies for Directory Server apply only to normal mode. By default, Directory Server runs in normal mode with SELinux policies. Additional resources How does SELinux work
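Two hedged, self-contained examples for the points above: the first sets the minimum security strength factor mentioned earlier so that connections weaker than 128-bit encryption are refused, and the second labels a non-standard LDAP port with the ldap_port_t SELinux type; the port number and SSF value are illustrative only.

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com << EOF
dn: cn=config
changetype: modify
replace: nsslapd-minssf
nsslapd-minssf: 128
EOF

# Label an additional, non-default LDAP port so the dirsrv_t domain may bind to it.
semanage port -a -t ldap_port_t -p tcp 1389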
[ "ldapsearch -x -D \"cn=jsmith,ou=people,dc=example,dc=com\" -b \"dc=example,dc=com\" \"(cn=joe)\"", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -X \"dn:cn=joe,dc=example,dc=com\" -f mods.ldif", "target permission bind_rule", "target (permission bind_rule)(permission bind_rule)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/planning_and_designing_directory_server/assembly_designing-secure-directory_designing-rhds
Chapter 23. service
Chapter 23. service The name of the service associated with the logging entity, if available. For example, syslog's APP-NAME and rsyslog's programname properties are mapped to the service field. Data type keyword
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/service
Chapter 19. File Systems
Chapter 19. File Systems FS-Cache FS-Cache in Red Hat Enterprise Linux 6 enables networked file systems (for example, NFS) to have a persistent cache of data on the client machine. Package: cachefilesd-0.10.2-3
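A hedged sketch of how FS-Cache is typically enabled for an NFS mount on Red Hat Enterprise Linux 6; the export path and mount point are placeholders, and the fsc mount option is what requests caching for the mount.

# Start the cache daemon and enable it at boot.
service cachefilesd start
chkconfig cachefilesd on
# Mount an NFS export with FS-Cache enabled.
mount -t nfs -o fsc server.example.com:/export /mnt/nfs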
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/chap-red_hat_enterprise_linux-6.9_technical_notes-technology_previews-file_systems
4.2. Basic Requirements and Setup
4.2. Basic Requirements and Setup To set up a KVM virtual machine on Red Hat Enterprise Linux 6, your system must meet the following criteria: Architecture Virtualization with the KVM hypervisor is currently only supported on Intel 64 and AMD64 systems. Disk space and RAM Minimum: 6 GB free disk space 2 GB RAM Customer Portal subscription To install virtualization packages, your host machine must be registered and subscribed to the Red Hat Customer Portal. To register, run the subscription-manager register command and follow the prompts. Alternatively, run the Red Hat Subscription Manager application from Applications → System Tools on the desktop to register. If you do not have a valid Red Hat subscription, visit the Red Hat online store to obtain one. For more information on registering and subscribing a system to the Red Hat Customer Portal, see https://access.redhat.com/solutions/253273 . Required packages Before you can use virtualization, a basic set of virtualization packages must be installed on your computer. Procedure 4.1. Installing the virtualization packages with yum To use virtualization on Red Hat Enterprise Linux, the libvirt , qemu-kvm , and qemu-img packages must be installed. These packages provide the user-level KVM emulator and disk image manager on the host system. Install the qemu-kvm , qemu-img , libvirt , and virt-manager packages with the following command: Download a Red Hat Enterprise Linux 7 Workstation binary DVD ISO image from the Red Hat Customer Portal . This image will be used to install the guest virtual machine's operating system. Note If you encounter any problems during the installation process, see the Troubleshooting section of the Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide .
[ "yum install qemu-kvm qemu-img libvirt virt-manager" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-Virtualization_Getting_Started-Quickstart_Requirements
7.3. Inkjet Printers
7.3. Inkjet Printers An Inkjet printer uses one of the most popular printing technologies today. The relatively low cost and multi-purpose printing abilities make inkjet printers a good choice for small businesses and home offices. Inkjet printers use quick-drying, water-based inks and a printhead with a series of small nozzles that spray ink onto the surface of the paper. The printhead assembly is driven by a belt-fed motor that moves the printhead across the paper. Inkjets were originally manufactured to print in monochrome (black and white) only. However, the printhead has since been expanded and the nozzles increased to accommodate cyan, magenta, yellow, and black. This combination of colors (called CMYK ) allows the printing of images with nearly the same quality as a photo development lab (when using certain types of coated paper.) When coupled with crisp and highly readable text print quality, inkjet printers are a sound all-in-one choice for monochrome or color printing needs. 7.3.1. Inkjet Consumables Inkjet printers tend to be low cost and scale slightly upward based on print quality, extra features, and the ability to print on larger formats than the standard legal or letter paper sizes. While the one-time cost of purchasing an inkjet printer is lower than other printer types, there is the factor of inkjet consumables that must be considered. Because demand for inkjets is large and spans the computing spectrum from home to enterprise, the procurement of consumables can be costly. Note When shopping for an inkjet printer, always make sure you know what kind of ink cartridge(s) it requires. This is especially critical for color units. CMYK inkjet printers require ink for each color; however, the important point is whether each color is stored in a separate cartridge or not. Some printers use one multi-chambered cartridge; unless some sort of refilling process is possible, as soon as one color ink runs out, the entire cartridge must be replaced. Other printers use a multi-chambered cartridge for cyan, magenta, and yellow, but also have a separate cartridge for black. In environments where a great deal of text is printed, this type of arrangement can be beneficial. However, the best solution is to find a printer with separate cartridges for each color; you can then easily replace any color whenever it runs out. Some inkjet manufacturers also require you to use specially treated paper for printing high-quality images and documents. Such paper uses a moderate to high gloss coating formulated to absorb colored inks, which prevents clumping (the tendency for water-based inks to collect in certain areas where colors blend, causing muddiness or dried ink blots) or banding (where the print output has a striped pattern of extraneous lines on the printed page.) Consult your printer's documentation for recommended papers.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-printers-types-inkjet
Chapter 1. Overview of Streams for Apache Kafka
Chapter 1. Overview of Streams for Apache Kafka Streams for Apache Kafka supports highly scalable, distributed, and high-performance data streaming based on the Apache Kafka project. The main components comprise: Kafka Broker Messaging broker responsible for delivering records from producing clients to consuming clients. Kafka Streams API API for writing stream processor applications. Producer and Consumer APIs Java-based APIs for producing and consuming messages to and from Kafka brokers. Kafka Bridge Streams for Apache Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. Kafka Connect A toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka MirrorMaker Replicates data between two Kafka clusters, within or across data centers. Kafka Exporter An exporter used in the extraction of Kafka metrics data for monitoring. A cluster of Kafka brokers is the hub connecting all these components. Figure 1.1. Streams for Apache Kafka architecture 1.1. Using the Kafka Bridge to connect with a Kafka cluster You can use the Streams for Apache Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol. When you set up the Kafka Bridge, you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface. Additional resources For information on installing and using the Kafka Bridge, see Using the Streams for Apache Kafka Bridge . 1.2. Document conventions User-replaced values User-replaced values, also known as replaceables , are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used. For example, the following code shows that <broker_host> , <port> , and <topic_name> must be replaced with your own broker address, port, and topic name: bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:<port> --topic <topic_name> --from-beginning
[ "bin/kafka-console-consumer.sh --bootstrap-server <broker_host>:<port> --topic <topic_name> --from-beginning" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/overview-str
Chapter 10. Deprecated functionality
Chapter 10. Deprecated functionality This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8. Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in the major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 10.1. Installer and image creation Several Kickstart commands and options have been deprecated Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs: auth or authconfig device deviceprobe dmraid install lilo lilocheck mouse multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document. Bugzilla:1642765 [1] The --interactive option of the ignoredisk Kickstart command has been deprecated Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. Bugzilla:1637872 [1] The Kickstart autostep command has been deprecated The autostep command has been deprecated. The related section about this command has been removed from the RHEL 8 documentation . Bugzilla:1904251 [1] 10.2. Security NSS SEED ciphers are deprecated The Mozilla Network Security Services ( NSS ) library will not support TLS cipher suites that use a SEED cipher in a future release. To ensure smooth transition of deployments that rely on SEED ciphers when NSS removes support, Red Hat recommends enabling support for other cipher suites. Note that SEED ciphers are already disabled by default in RHEL. Bugzilla:1817533 TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level: For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. 
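The command that switches the policy is not reproduced above; assuming the standard crypto-policies-scripts tooling referenced by the update-crypto-policies(8) man page, the change and a quick verification look like this (rebooting afterwards ensures that all services pick up the new policy):

update-crypto-policies --set LEGACY
update-crypto-policies --show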
Bugzilla:1660839 DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. Bugzilla:1646541 [1] fapolicyd.rules is deprecated The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules replaces the /etc/fapolicyd/fapolicyd.rules file. The fagenrules script now merges all component rule files in this directory to the /etc/fapolicyd/compiled.rules file. Rules in /etc/fapolicyd/fapolicyd.trust are still processed by the fapolicyd framework but only for ensuring backward compatibility. Bugzilla:2054741 SSL2 Client Hello has been deprecated in NSS The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow to start a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. Bugzilla:1645153 [1] NTLM and Krb4 are deprecated in Cyrus SASL The NTLM and Kerberos 4 authentication protocols have been deprecated and might be removed in a future major version of RHEL. These protocols are no longer considered secure and have already been removed from upstream implementations. Jira:RHELDOCS-17380 [1] Runtime disabling SELinux using /etc/selinux/config is now deprecated Runtime disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config file has been deprecated. In RHEL 9, when you disable SELinux only through /etc/selinux/config , the system starts with SELinux enabled but with no policy loaded. If your scenario really requires to completely disable SELinux, Red Hat recommends disabling SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title. Bugzilla:1932222 The ipa SELinux module removed from selinux-policy The ipa SELinux module has been removed from the selinux-policy package because it is no longer maintained. The functionality is now included in the ipa-selinux subpackage. If your scenario requires the use of types or interfaces from the ipa module in a local SELinux policy, install the ipa-selinux package. Bugzilla:1461914 [1] TPM 1.2 is deprecated The Trusted Platform Module (TPM) secure cryptoprocessor standard was updated to version 2.0 in 2016. TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the major release. Bugzilla:1657927 [1] crypto-policies derived properties are now deprecated With the introduction of scopes for crypto-policies directives in custom policies, the following derived properties have been deprecated: tls_cipher , ssh_cipher , ssh_group , ike_protocol , and sha1_in_dnssec . Additionally, the use of the protocol property without specifying a scope is now deprecated as well. See the crypto-policies(7) man page for recommended replacements. Bugzilla:2011208 10.3. 
Subscription management The --token option of the subscription-manager command is deprecated The --token=<TOKEN> option of the subscription-manager register command is an authentication method that helps register your system to Red Hat. This option depends on capabilities offered by the entitlement server. The default entitlement server, subscription.rhsm.redhat.com , is planning to turn off this capability. As a consequence, attempting to use subscription-manager register --token=<TOKEN> might fail with the following error message: You can continue registering your system using other authorization methods, such as including paired options --username / --password and --org / --activationkey of the subscription-manager register command. Bugzilla:2170082 10.4. Software management rpmbuild --sign is deprecated The rpmbuild --sign command is deprecated since RHEL 8.1. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead. Bugzilla:1688849 10.5. Shells and command-line tools The OpenEXR component has been deprecated The OpenEXR component has been deprecated. Hence, the support for the EXR image format has been dropped from the imagecodecs module. Bugzilla:1886310 The dump utility from the dump package has been deprecated The dump utility used for backup of file systems has been deprecated and will not be available in RHEL 9. In RHEL 9, Red Hat recommends using the tar , dd , or bacula , backup utility, based on type of usage, which provides full and safe backups on ext2, ext3, and ext4 file systems. Note that the restore utility from the dump package remains available and supported in RHEL 9 and is available as the restore package. Bugzilla:1997366 [1] The hidepid=n mount option is not supported in RHEL 8 systemd The mount option hidepid=n , which controls who can access information in /proc/[pid] directories, is not compatible with systemd infrastructure provided in RHEL 8. In addition, using this option might cause certain services started by systemd to produce SELinux AVC denial messages and prevent other operations from completing. For more information, see the related Knowledgebase solution Is mounting /proc with "hidepid=2" recommended with RHEL7 and RHEL8? . Bugzilla:2038929 The /usr/lib/udev/rename_device utility has been deprecated The udev helper utility /usr/lib/udev/rename_device for renaming network interfaces has been deprecated. Bugzilla:1875485 The ABRT tool has been deprecated The Automatic Bug Reporting Tool (ABRT) for detecting and reporting application crashes has been deprecated in RHEL 8. As a replacement, use the systemd-coredump tool to log and store core dumps, which are automatically generated files after a program crashes. Bugzilla:2055826 [1] The ReaR crontab has been deprecated The /etc/cron.d/rear crontab from the rear package has been deprecated in RHEL 8 and will not be available in RHEL 9. The crontab checks every night whether the disk layout has changed, and runs rear mkrescue command if a change happened. If you require this functionality, after an upgrade to RHEL 9, configure periodic runs of ReaR manually. Bugzilla:2083301 The SQLite database backend in Bacula has been deprecated The Bacula backup system supported multiple database backends: PostgreSQL, MySQL, and SQLite. The SQLite backend has been deprecated and will become unsupported in a later release of RHEL. 
As a replacement, migrate to one of the other backends (PostgreSQL or MySQL) and do not use the SQLite backend in new deployments. Jira:RHEL-6859 The raw command has been deprecated The raw ( /usr/bin/raw ) command has been deprecated. Using this command in future releases of Red Hat Enterprise Linux can result in an error. Jira:RHELPLAN-133171 [1] 10.6. Networking The PF_KEYv2 kernel API is deprecated Applications can configure the kernel's IPsec implementation by using either the PF_KEYv2 API or the newer netlink API. PF_KEYv2 is not actively maintained upstream and lacks important security features, such as modern ciphers, offload, and extended sequence number support. As a result, starting with RHEL 8.9, the PF_KEYv2 API is deprecated. If you use this kernel API in your application, migrate it to use the modern netlink API as an alternative. Jira:RHEL-1257 [1] Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. Bugzilla:1647725 [1] The dropwatch tool is deprecated The dropwatch tool has been deprecated. The tool will not be supported in future releases, thus it is not recommended for new deployments. As a replacement for this package, Red Hat recommends using the perf command line tool. For more information on using the perf command line tool, see the Getting started with Perf section on the Red Hat customer portal or the perf man page. Bugzilla:1929173 The xinetd service has been deprecated The xinetd service has been deprecated and will be removed in RHEL 9. As a replacement, use systemd . For further details, see How to convert xinetd service to systemd . Bugzilla:2009113 [1] The cgdcbxd package is deprecated Control group data center bridging exchange daemon ( cgdcbxd ) is a service to monitor data center bridging (DCB) netlink events and manage the net_prio control group subsystem. Starting with RHEL 8.5, the cgdcbxd package is deprecated and will be removed in a future major RHEL release. Bugzilla:2006665 The WEP Wi-Fi connection method is deprecated The insecure wired equivalent privacy (WEP) Wi-Fi connection method is deprecated in RHEL 8 and will be removed in RHEL 9.0. For secure Wi-Fi connections, use the Wi-Fi Protected Access 3 (WPA3) or WPA2 connection methods. Bugzilla:2029338 The unsupported xt_u32 module is now deprecated Using the unsupported xt_u32 module, users of iptables can match arbitrary 32 bits in the packet header or payload. Since RHEL 8.6, the xt_u32 module is deprecated and will be removed in RHEL 9. If you use xt_u32 , migrate to the nftables packet filtering framework. For example, first change your firewall to use iptables with native matches to incrementally replace individual rules, and later use the iptables-translate and accompanying utilities to migrate to nftables. 
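For illustration only, the following sketch shows how a single rule and a saved ruleset can be converted; the rule and the file names are examples, not values taken from this document.
# Translate one iptables rule with a native match into its nftables equivalent
# (the rule itself is illustrative):
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
# The command prints an equivalent nft rule, in the general form:
#   nft add rule ip filter INPUT tcp dport 22 counter accept
# A complete saved ruleset can be converted in one pass:
iptables-save > ruleset.txt
iptables-restore-translate -f ruleset.txt > ruleset.nft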
If no native match exists in nftables , use the raw payload matching feature of nftables . For details, see the raw payload expression section in the nft(8) man page. Bugzilla:2061288 The term slaves is deprecated in the nmstate API Red Hat is committed to using conscious language. Therefore the slaves term is deprecated in the Nmstate API. Use the term port when you use nmstatectl . Jira:RHELDOCS-17641 10.7. Kernel The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. Bugzilla:1878207 [1] The Linux firewire sub-system and its associated user-space components are deprecated in RHEL 8 The firewire sub-system provides interfaces to use and maintain any resources on the IEEE 1394 bus. In RHEL 9, firewire will no longer be supported in the kernel package. Note that firewire contains several user-space components provided by the libavc1394 , libdc1394 , libraw1394 packages. These packages are subject to the deprecation as well. Bugzilla:1871863 [1] Installing RHEL for Real Time 8 using diskless boot is now deprecated Diskless booting allows multiple systems to share a root file system through the network. While convenient, diskless boot is prone to introducing network latency in real-time workloads. With a future minor update of RHEL for Real Time 8, the diskless booting feature will no longer be supported. Bugzilla:1748980 Kernel live patching now covers all RHEL minor releases Since RHEL 8.1, kernel live patches have been provided for selected minor release streams of RHEL covered under the Extended Update Support (EUS) policy to remediate Critical and Important Common Vulnerabilities and Exposures (CVEs). To accommodate the maximum number of concurrently covered kernels and use cases, the support window for each live patch has been decreased from 12 to 6 months for every minor, major, and zStream version of the kernel. It means that on the day a kernel live patch is released, it will cover every minor release and scheduled errata kernel delivered in the past 6 months. For more information about this feature, see Applying patches with kernel live patching . For details about available kernel live patches, see Kernel Live Patch life cycles . Bugzilla:1958250 The crash-ptdump-command package is deprecated The crash-ptdump-command package, which is a ptdump extension module for the crash utility, is deprecated and might not be available in future RHEL releases. The ptdump command fails to retrieve the log buffer when working in the Single Range Output mode and only works in the Table of Physical Addresses (ToPA) mode. crash-ptdump-command is currently not maintained upstream Bugzilla:1838927 [1] 10.8. Boot loader The kernelopts environment variable has been deprecated In RHEL 8, the kernel command-line parameters for systems using the GRUB bootloader were defined in the kernelopts environment variable. The variable was stored in the /boot/grub2/grubenv file for each kernel boot entry. However, storing the kernel command-line parameters using kernelopts was not robust. Therefore, with a future major update of RHEL, kernelopts will be removed and the kernel command-line parameters will be stored in the Boot Loader Specification (BLS) snippet instead. Bugzilla:2060759 10.9. 
File systems and storage The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the TuneD service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler . Bugzilla:1665295 [1] NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. Bugzilla:1592011 [1] peripety is deprecated The peripety package is deprecated since RHEL 8.3. The Peripety storage event notification daemon parses system storage logs into structured storage events. It helps you investigate storage issues. Bugzilla:1871953 VDO write modes other than async are deprecated VDO supports several write modes in RHEL 8: sync async async-unsafe auto Starting with RHEL 8.4, the following write modes are deprecated: sync Devices above the VDO layer cannot recognize if VDO is synchronous, and consequently, the devices cannot take advantage of the VDO sync mode. async-unsafe VDO added this write mode as a workaround for the reduced performance of async mode, which complies to Atomicity, Consistency, Isolation, and Durability (ACID). Red Hat does not recommend async-unsafe for most use cases and is not aware of any users who rely on it. auto This write mode only selects one of the other write modes. It is no longer necessary when VDO supports only a single write mode. These write modes will be removed in a future major RHEL release. The recommended VDO write mode is now async . For more information on VDO write modes, see Selecting a VDO write mode . Jira:RHELPLAN-70700 [1] VDO manager has been deprecated The python-based VDO management software has been deprecated and will be removed from RHEL 9. In RHEL 9, it will be replaced by the LVM-VDO integration. Therefore, it is recommended to create VDO volumes using the lvcreate command. The existing volumes created using the VDO management software can be converted using the /usr/sbin/lvm_import_vdo script, provided by the lvm2 package. For more information on the LVM-VDO implementation, see Deduplicating and compressing logical volumes on RHEL . Bugzilla:1949163 cramfs has been deprecated Due to lack of users, the cramfs kernel module is deprecated. squashfs is recommended as an alternative solution. Bugzilla:1794513 [1] 10.10. High availability and clusters pcs commands that support the clufter tool have been deprecated The pcs commands that support the clufter tool for analyzing cluster configuration formats have been deprecated. These commands now print a warning that the command has been deprecated and sections related to these commands have been removed from the pcs help display and the pcs(8) man page. 
The following commands have been deprecated: pcs config import-cman for importing CMAN / RHEL6 HA cluster configuration pcs config export for exporting cluster configuration to a list of pcs commands which recreate the same cluster Bugzilla:1851335 [1] 10.11. Dynamic programming languages, web and database servers The mod_php module provided with PHP for use with the Apache HTTP Server has been deprecated The mod_php module provided with PHP for use with the Apache HTTP Server in RHEL 8 is available but not enabled in the default configuration. The module is no longer available in RHEL 9. Since RHEL 8, PHP scripts are run using the FastCGI Process Manager ( php-fpm ) by default. For more information, see Using PHP with the Apache HTTP Server . Bugzilla:2225332 10.12. Compilers and development tools The gdb.i686 packages are deprecated In RHEL 8.1, the 32-bit versions of the GNU Debugger (GDB), gdb.i686 , were shipped due to a dependency problem in another package. Because RHEL 8 does not support 32-bit hardware, the gdb.i686 packages are deprecated since RHEL 8.4. The 64-bit versions of GDB, gdb.x86_64 , are fully capable of debugging 32-bit applications. If you use gdb.i686 , note the following important issues: The gdb.i686 packages will no longer be updated. Users must install gdb.x86_64 instead. If you have gdb.i686 installed, installing gdb.x86_64 will cause yum to report package gdb-8.2-14.el8.x86_64 obsoletes gdb < 8.2-14.el8 provided by gdb-8.2-12.el8.i686 . This is expected. Either uninstall gdb.i686 or pass dnf the --allowerasing option to remove gdb.i686 and install gdb.x86_64 . Users will no longer be able to install the gdb.i686 packages on 64-bit systems, that is, those with the libc.so.6()(64-bit) packages. Bugzilla:1853140 [1] libdwarf has been deprecated The libdwarf library has been deprecated in RHEL 8. The library will likely not be supported in future major releases. Instead, use the elfutils and libdw libraries for applications that wish to process ELF/DWARF files. Alternatives for the libdwarf-tools dwarfdump program are the binutils readelf program or the elfutils eu-readelf program, both used by passing the --debug-dump flag. Bugzilla:1920624 10.13. Identity Management openssh-ldap has been deprecated The openssh-ldap subpackage has been deprecated in Red Hat Enterprise Linux 8 and will be removed in RHEL 9. As the openssh-ldap subpackage is not maintained upstream, Red Hat recommends using SSSD and the sss_ssh_authorizedkeys helper, which integrate better with other IdM solutions and are more secure. By default, the SSSD ldap and ipa providers read the sshPublicKey LDAP attribute of the user object, if available. Note that you cannot use the default SSSD configuration for the ad provider or IdM trusted domains to retrieve SSH public keys from Active Directory (AD), since AD does not have a default LDAP attribute to store a public key. To allow the sss_ssh_authorizedkeys helper to get the key from SSSD, enable the ssh responder by adding ssh to the services option in the sssd.conf file. See the sssd.conf(5) man page for details. To allow sshd to use sss_ssh_authorizedkeys , add the AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys and AuthorizedKeysCommandUser nobody options to the /etc/ssh/sshd_config file as described by the sss_ssh_authorizedkeys(1) man page. Bugzilla:1871025 DES and 3DES encryption types have been removed Due to security reasons, the Data Encryption Standard (DES) algorithm has been deprecated and disabled by default since RHEL 7. 
With the recent rebase of Kerberos packages, single-DES (DES) and triple-DES (3DES) encryption types have been removed from RHEL 8. If you have configured services or users to only use DES or 3DES encryption, you might experience service interruptions such as: Kerberos authentication errors unknown enctype encryption errors Kerberos Distribution Centers (KDCs) with DES-encrypted Database Master Keys ( K/M ) fail to start Perform the following actions to prepare for the upgrade: Check if your KDC uses DES or 3DES encryption with the krb5check open source Python scripts. See krb5check on GitHub. If you are using DES or 3DES encryption with any Kerberos principals, re-key them with a supported encryption type, such as Advanced Encryption Standard (AES). For instructions on re-keying, see Retiring DES from MIT Kerberos Documentation. Test independence from DES and 3DES by temporarily setting the following Kerberos options before upgrading: In /var/kerberos/krb5kdc/kdc.conf on the KDC, set supported_enctypes and do not include des or des3 . For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set allow_weak_crypto to false . It is false by default. For every host, in /etc/krb5.conf and any files in /etc/krb5.conf.d , set permitted_enctypes , default_tgs_enctypes , and default_tkt_enctypes , and do not include des or des3 . If you do not experience any service interruptions with the test Kerberos settings from the step, remove them and upgrade. You do not need those settings after upgrading to the latest Kerberos packages. Bugzilla:1877991 The SSSD version of libwbclient has been removed The SSSD implementation of the libwbclient package was deprecated in RHEL 8.4. As it cannot be used with recent versions of Samba, the SSSD implementation of libwbclient has now been removed. Bugzilla:1947671 Standalone use of the ctdb service has been deprecated Since RHEL 8.4, customers are advised to use the ctdb clustered Samba service only when both of the following conditions apply: The ctdb service is managed as a pacemaker resource with the resource-agent ctdb . The ctdb service uses storage volumes that contain either a GlusterFS file system provided by the Red Hat Gluster Storage product or a GFS2 file system. The stand-alone use case of the ctdb service has been deprecated and will not be included in a major release of Red Hat Enterprise Linux. For further information on support policies for Samba, see the Knowledgebase article Support Policies for RHEL Resilient Storage - ctdb General Policies . Bugzilla:1916296 [1] Indirect AD integration with IdM via WinSync has been deprecated WinSync is no longer actively developed in RHEL 8 due to several functional limitations: WinSync supports only one Active Directory (AD) domain. Password synchronization requires installing additional software on AD Domain Controllers. For a more robust solution with better resource and security separation, Red Hat recommends using a cross-forest trust for indirect integration with Active Directory. See the Indirect integration documentation. Jira:RHELPLAN-100400 [1] Running Samba as a PDC or BDC is deprecated The classic domain controller mode that enabled administrators to run Samba as an NT4-like primary domain controller (PDC) and backup domain controller (BDC) is deprecated. The code and settings to configure these modes will be removed in a future Samba release. 
As long as the Samba version in RHEL 8 provides the PDC and BDC modes, Red Hat supports these modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains. If you use the PDC to authenticate only Linux users, Red Hat suggests migrating to Red Hat Identity Management (IdM) that is included in RHEL subscriptions. However, you cannot join Windows systems to an IdM domain. Note that Red Hat continues supporting the PDC functionality IdM uses in the background. Red Hat does not support running Samba as an AD domain controller (DC). Bugzilla:1926114 The SMB1 protocol is deprecated in Samba Starting with Samba 4.11, the insecure Server Message Block version 1 (SMB1) protocol is deprecated and will be removed in a future release. To improve the security, by default, SMB1 is disabled in the Samba server and client utilities. Jira:RHELDOCS-16612 [1] Limited support for FreeRADIUS In RHEL 8, the following external authentication modules are deprecated as part of the FreeRADIUS offering: The MySQL, PostgreSQL, SQlite, and unixODBC database connectors The Perl language module The REST API module Note The PAM authentication module and other authentication modules that are provided as part of the base package are not affected. You can find replacements for the deprecated modules in community-supported packages, for example in the Fedora project. In addition, the scope of support for the freeradius package will be limited to the following use cases in future RHEL releases: Using FreeRADIUS as an authentication provider with Identity Management (IdM) as the backend source of authentication. The authentication occurs through the krb5 and LDAP authentication packages or as PAM authentication in the main FreeRADIUS package. Using FreeRADIUS to provide a source-of-truth for authentication in IdM, through the Python 3 authentication package. In contrast to these deprecations, Red Hat will strengthen the support of the following external authentication modules with FreeRADIUS: Authentication based on krb5 and LDAP Python 3 authentication The focus on these integration options is in close alignment with the strategic direction of Red Hat IdM. Jira:RHELDOCS-17573 [1] 10.14. Desktop The libgnome-keyring library has been deprecated The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards. Bugzilla:1607766 [1] LibreOffice is deprecated The LibreOffice RPM packages are now deprecated and will be removed in a future major RHEL release. LibreOffice continues to be fully supported through the entire life cycle of RHEL 7, 8, and 9. As a replacement for the RPM packages, Red Hat recommends that you install LibreOffice from either of the following sources provided by The Document Foundation: The official Flatpak package in the Flathub repository: https://flathub.org/apps/org.libreoffice.LibreOffice . The official RPM packages: https://www.libreoffice.org/download/download-libreoffice/ . Jira:RHELDOCS-16300 [1] 10.15. Graphics infrastructures AGP graphics cards are no longer supported Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. 
Use graphics cards with a PCI-Express bus as the recommended replacement. Bugzilla:1569610 [1] Motif has been deprecated The Motif widget toolkit has been deprecated in RHEL, because development in the upstream Motif community is inactive. The following Motif packages have been deprecated, including their development and debugging variants: motif openmotif openmotif21 openmotif22 Additionally, the motif-static package has been removed. Red Hat recommends using the GTK toolkit as a replacement. GTK is more maintainable and provides new features compared to Motif. Jira:RHELPLAN-98983 [1] 10.16. The web console The web console no longer supports incomplete translations The RHEL web console no longer provides translations for languages that have translations available for less than 50 % of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead. Bugzilla:1666722 The remotectl command is deprecated The remotectl command has been deprecated and will not be available in future releases of RHEL. You can use the cockpit-certificate-ensure command as a replacement. However, note that cockpit-certificate-ensure does not have feature parity with remotectl . It does not support bundled certificates and keychain files and requires them to be split out. Jira:RHELPLAN-147538 [1] 10.17. Red Hat Enterprise Linux system roles The geoipupdate package has been deprecated The geoipupdate package requires a third-party subscription and it also downloads proprietary content. Therefore, the geoipupdate package has been deprecated, and will be removed in a future major RHEL version. Bugzilla:1874892 [1] The network system role displays a deprecation warning when configuring teams on RHEL 9 nodes The network teaming capabilities have been deprecated in RHEL 9. As a result, using the network RHEL system role on a RHEL 8 control node to configure a network team on RHEL 9 nodes shows a warning about the deprecation. Bugzilla:2021685 Ansible Engine has been deprecated Previous versions of RHEL 8 provided access to an Ansible Engine repository, with a limited scope of support, to enable supported RHEL Automation use cases, such as RHEL system roles and Insights remediations. Ansible Engine has been deprecated, and Ansible Engine 2.9 will have no support after September 29, 2023. For more details on the supported use cases, see Scope of support for the Ansible Core package included in the RHEL 9 AppStream . Users must manually migrate their systems from Ansible Engine to Ansible Core. To do so, follow these steps: Procedure Check if the system is running RHEL 8.7 or a later release: Uninstall Ansible Engine 2.9: Disable the ansible-2-for-rhel-8-x86_64-rpms repository: Install the Ansible Core package from the RHEL 8 AppStream repository: For more details, see: Using Ansible in RHEL 8.6 and later . Bugzilla:2006081 10.18. Virtualization virsh iface-* commands have become deprecated The virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , are now deprecated, and will be removed in a future major version of RHEL. In addition, these commands frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications, such as nmcli . 
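For illustration only, the following nmcli commands show the equivalent NetworkManager workflow; the connection name enp1s0 is an example, not a value taken from this document.
# List connections and devices instead of virsh iface-list:
nmcli connection show
nmcli device status
# Activate or deactivate a connection instead of virsh iface-start / virsh iface-destroy:
nmcli connection up enp1s0
nmcli connection down enp1s0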
Bugzilla:1664592 [1] virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. Note, however, that some features available in virt-manager might not be yet available in the RHEL web console. Jira:RHELPLAN-10304 [1] Limited support for virtual machine snapshots Creating snapshots of virtual machines (VMs) is currently only supported for VMs not using the UEFI firmware. In addition, during the snapshot operation, the QEMU monitor may become blocked, which negatively impacts the hypervisor performance for certain workloads. Also note that the current mechanism of creating VM snapshots has been deprecated, and Red Hat does not recommend using VM snapshots in a production environment. Bugzilla:1686057 The Cirrus VGA virtual GPU type has been deprecated With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga or virtio-vga devices instead of Cirrus VGA . Bugzilla:1651994 [1] SPICE has been deprecated The SPICE remote display protocol has become deprecated. As a result, SPICE will remain supported in RHEL 8, but Red Hat recommends using alternate solutions for remote display streaming: For remote console access, use the VNC protocol. For advanced remote display functions, use third party tools such as RDP, HP RGS, or Mechdyne TGX. Note that the QXL graphics device, which is used by SPICE, has become deprecated as well. Bugzilla:1849563 [1] KVM on IBM POWER has been deprecated Using KVM virtualization on IBM POWER hardware has become deprecated. As a result, KVM on IBM POWER is still supported in RHEL 8, but will become unsupported in a future major release of RHEL. Jira:RHELPLAN-71200 [1] SecureBoot image verification using SHA1-based signatures is deprecated Performing SecureBoot image verification using SHA1-based signatures on UEFI (PE/COFF) executables has become deprecated. Instead, Red Hat recommends using signatures based on the SHA2 algorithm, or later. Bugzilla:1935497 [1] Using SPICE to attach smart card readers to virtual machines has been deprecated The SPICE remote display protocol has been deprecated in RHEL 8. Since the only recommended way to attach smart card readers to virtual machines (VMs) depends on the SPICE protocol, the usage of smart cards in VMs has also become deprecated in RHEL 8. In a future major version of RHEL, the functionality of attaching smart card readers to VMs will only be supported by third party remote visualization solutions. Bugzilla:2059626 RDMA-based live migration is deprecated With this update, migrating running virtual machines using Remote Direct Memory Access (RDMA) has become deprecated. As a result, it is still possible to use the rdma:// migration URI to request migration over RDMA, but this feature will become unsupported in a future major release of RHEL. Jira:RHELPLAN-153267 [1] 10.19. Containers The Podman varlink-based API v1.0 has been removed The Podman varlink-based API v1.0 was deprecated in a release of RHEL 8. Podman v2.0 introduced a new Podman v2.0 RESTful API. With the release of Podman v3.0, the varlink-based API v1.0 has been completely removed. 
Jira:RHELPLAN-45858 [1] container-tools:1.0 has been deprecated The container-tools:1.0 module has been deprecated and will no longer receive security updates. It is recommended to use a newer supported stable module stream, such as container-tools:2.0 or container-tools:3.0 . Jira:RHELPLAN-59825 [1] The container-tools:2.0 module has been deprecated The container-tools:2.0 module has been deprecated and will no longer receive security updates. It is recommended to use a newer supported stable module stream, such as container-tools:3.0 . Jira:RHELPLAN-85066 [1] Flatpak images except GIMP has been deprecated The rhel8/firefox-flatpak , rhel8/thunderbird-flatpak , rhel8/inkscape-flatpak , and rhel8/libreoffice-flatpak RHEL 8 Flatpak Applications have been deprecated and replaced by the RHEL 9 versions. The rhel8/gimp-flatpak Flatpak Application is not deprecated because there is no replacement yet in RHEL 9. Bugzilla:2142499 The CNI network stack has been deprecated The Container Network Interface (CNI) network stack is deprecated and will be removed from Podman in a future minor release of RHEL. Previously, containers connected to the single Container Network Interface (CNI) plugin only via DNS. Podman v.4.0 introduced a new Netavark network stack. You can use the Netavark network stack with Podman and other Open Container Initiative (OCI) container management applications. The Netavark network stack for Podman is also compatible with advanced Docker functionalities. Containers in multiple networks can access containers on any of those networks. For more information, see Switching the network stack from CNI to Netavark . Jira:RHELDOCS-16755 [1] container-tools:3.0 has been deprecated The container-tools:3.0 module has been deprecated and will no longer receive security updates. To continue to build and run Linux Containers on RHEL, use a newer, stable, and supported module stream, such as container-tools:4.0 . For instructions on switching to a later stream, see Switching to a later stream . Jira:RHELPLAN-146398 [1] The Inkscape and LibreOffice Flatpak images are deprecated The rhel9/inkscape-flatpak and rhel9/libreoffice-flatpak Flatpak images, which are available as Technology Previews, have been deprecated. Red Hat recommends the following alternatives to these images: To replace rhel9/inkscape-flatpak , use the inkscape RPM package. To replace rhel9/libreoffice-flatpak , see the LibreOffice deprecation release note . Jira:RHELDOCS-17102 [1] 10.20. Deprecated packages This section lists packages that have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux. For changes to packages between RHEL 7 and RHEL 8, see Changes to packages in the Considerations in adopting RHEL 8 document. Important The support status of deprecated packages remains unchanged within RHEL 8. For more information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . 
The following packages have been deprecated in RHEL 8: 389-ds-base-legacy-tools abrt abrt-addon-ccpp abrt-addon-kerneloops abrt-addon-pstoreoops abrt-addon-vmcore abrt-addon-xorg abrt-cli abrt-console-notification abrt-dbus abrt-desktop abrt-gui abrt-gui-libs abrt-libs abrt-tui adobe-source-sans-pro-fonts adwaita-qt alsa-plugins-pulseaudio amanda amanda-client amanda-libs amanda-server ant-contrib antlr3 antlr32 aopalliance apache-commons-collections apache-commons-compress apache-commons-exec apache-commons-jxpath apache-commons-parent apache-ivy apache-parent apache-resource-bundles apache-sshd apiguardian aspnetcore-runtime-3.0 aspnetcore-runtime-3.1 aspnetcore-runtime-5.0 aspnetcore-targeting-pack-3.0 aspnetcore-targeting-pack-3.1 aspnetcore-targeting-pack-5.0 assertj-core authd auto autoconf213 autogen autogen-libopts awscli base64coder batik batik-css batik-util bea-stax bea-stax-api bind-export-devel bind-export-libs bind-libs-lite bind-pkcs11 bind-pkcs11-devel bind-pkcs11-libs bind-pkcs11-utils bind-sdb bind-sdb bind-sdb-chroot bluez-hid2hci boost-jam boost-signals bouncycastle bpg-algeti-fonts bpg-chveulebrivi-fonts bpg-classic-fonts bpg-courier-fonts bpg-courier-s-fonts bpg-dedaena-block-fonts bpg-dejavu-sans-fonts bpg-elite-fonts bpg-excelsior-caps-fonts bpg-excelsior-condenced-fonts bpg-excelsior-fonts bpg-fonts-common bpg-glaho-fonts bpg-gorda-fonts bpg-ingiri-fonts bpg-irubaqidze-fonts bpg-mikhail-stephan-fonts bpg-mrgvlovani-caps-fonts bpg-mrgvlovani-fonts bpg-nateli-caps-fonts bpg-nateli-condenced-fonts bpg-nateli-fonts bpg-nino-medium-cond-fonts bpg-nino-medium-fonts bpg-sans-fonts bpg-sans-medium-fonts bpg-sans-modern-fonts bpg-sans-regular-fonts bpg-serif-fonts bpg-serif-modern-fonts bpg-ucnobi-fonts brlapi-java bsh buildnumber-maven-plugin byaccj cal10n cbi-plugins cdparanoia cdparanoia-devel cdparanoia-libs cdrdao cmirror codehaus-parent codemodel compat-exiv2-026 compat-guile18 compat-hwloc1 compat-libpthread-nonshared compat-libtiff3 compat-openssl10 compat-sap-c++-11 compat-sap-c++-10 compat-sap-c++-9 createrepo_c-devel ctags ctags-etags custodia cyrus-imapd-vzic dbus-c++ dbus-c++-devel dbus-c++-glib dbxtool dhcp-libs directory-maven-plugin directory-maven-plugin-javadoc dirsplit dleyna-connector-dbus dleyna-core dleyna-renderer dleyna-server dnssec-trigger dnssec-trigger-panel dotnet-apphost-pack-3.0 dotnet-apphost-pack-3.1 dotnet-apphost-pack-5.0 dotnet-host-fxr-2.1 dotnet-host-fxr-2.1 dotnet-hostfxr-3.0 dotnet-hostfxr-3.1 dotnet-hostfxr-5.0 dotnet-runtime-2.1 dotnet-runtime-3.0 dotnet-runtime-3.1 dotnet-runtime-5.0 dotnet-sdk-2.1 dotnet-sdk-2.1.5xx dotnet-sdk-3.0 dotnet-sdk-3.1 dotnet-sdk-5.0 dotnet-targeting-pack-3.0 dotnet-targeting-pack-3.1 dotnet-targeting-pack-5.0 dotnet-templates-3.0 dotnet-templates-3.1 dotnet-templates-5.0 dotnet5.0-build-reference-packages dptfxtract drpm drpm-devel dump dvd+rw-tools dyninst-static eclipse-ecf eclipse-ecf-core eclipse-ecf-runtime eclipse-emf eclipse-emf-core eclipse-emf-runtime eclipse-emf-xsd eclipse-equinox-osgi eclipse-jdt eclipse-license eclipse-p2-discovery eclipse-pde eclipse-platform eclipse-swt ed25519-java ee4j-parent elfutils-devel-static elfutils-libelf-devel-static enca enca-devel environment-modules-compat evince-browser-plugin exec-maven-plugin farstream02 felix-gogo-command felix-gogo-runtime felix-gogo-shell felix-scr felix-osgi-compendium felix-osgi-core felix-osgi-foundation felix-parent file-roller fipscheck fipscheck-devel fipscheck-lib firewire fonts-tweak-tool forge-parent freeradius-mysql 
freeradius-perl freeradius-postgresql freeradius-rest freeradius-sqlite freeradius-unixODBC fuse-sshfs fusesource-pom future gamin gamin-devel gavl gcc-toolset-9 gcc-toolset-9-annobin gcc-toolset-9-build gcc-toolset-9-perftools gcc-toolset-9-runtime gcc-toolset-9-toolchain gcc-toolset-10 gcc-toolset-10-annobin gcc-toolset-10-binutils gcc-toolset-10-binutils-devel gcc-toolset-10-build gcc-toolset-10-dwz gcc-toolset-10-dyninst gcc-toolset-10-dyninst-devel gcc-toolset-10-elfutils gcc-toolset-10-elfutils-debuginfod-client gcc-toolset-10-elfutils-debuginfod-client-devel gcc-toolset-10-elfutils-devel gcc-toolset-10-elfutils-libelf gcc-toolset-10-elfutils-libelf-devel gcc-toolset-10-elfutils-libs gcc-toolset-10-gcc gcc-toolset-10-gcc-c++ gcc-toolset-10-gcc-gdb-plugin gcc-toolset-10-gcc-gfortran gcc-toolset-10-gdb gcc-toolset-10-gdb-doc gcc-toolset-10-gdb-gdbserver gcc-toolset-10-libasan-devel gcc-toolset-10-libatomic-devel gcc-toolset-10-libitm-devel gcc-toolset-10-liblsan-devel gcc-toolset-10-libquadmath-devel gcc-toolset-10-libstdc++-devel gcc-toolset-10-libstdc++-docs gcc-toolset-10-libtsan-devel gcc-toolset-10-libubsan-devel gcc-toolset-10-ltrace gcc-toolset-10-make gcc-toolset-10-make-devel gcc-toolset-10-perftools gcc-toolset-10-runtime gcc-toolset-10-strace gcc-toolset-10-systemtap gcc-toolset-10-systemtap-client gcc-toolset-10-systemtap-devel gcc-toolset-10-systemtap-initscript gcc-toolset-10-systemtap-runtime gcc-toolset-10-systemtap-sdt-devel gcc-toolset-10-systemtap-server gcc-toolset-10-toolchain gcc-toolset-10-valgrind gcc-toolset-10-valgrind-devel gcc-toolset-11-make-devel GConf2 GConf2-devel gegl genisoimage genwqe-tools genwqe-vpd genwqe-zlib genwqe-zlib-devel geoipupdate geronimo-annotation geronimo-jms geronimo-jpa geronimo-parent-poms gfbgraph gflags gflags-devel glassfish-annotation-api glassfish-el glassfish-fastinfoset glassfish-jaxb-core glassfish-jaxb-txw2 glassfish-jsp glassfish-jsp-api glassfish-legal glassfish-master-pom glassfish-servlet-api glew-devel glib2-fam glog glog-devel gmock gmock-devel gnome-abrt gnome-boxes gnome-menus-devel gnome-online-miners gnome-shell-extension-disable-screenshield gnome-shell-extension-horizontal-workspaces gnome-shell-extension-no-hot-corner gnome-shell-extension-window-grouper gnome-themes-standard gnu-free-fonts-common gnu-free-mono-fonts gnu-free-sans-fonts gnu-free-serif-fonts gnupg2-smime gnuplot gnuplot-common gobject-introspection-devel google-gson google-noto-sans-syriac-eastern-fonts google-noto-sans-syriac-estrangela-fonts google-noto-sans-syriac-western-fonts google-noto-sans-tibetan-fonts google-noto-sans-ui-fonts gphoto2 gsl-devel gssntlmssp gtest gtest-devel gtkmm24 gtkmm24-devel gtkmm24-docs gtksourceview3 gtksourceview3-devel gtkspell gtkspell-devel gtkspell3 guile gutenprint-gimp gutenprint-libs-ui gvfs-afc gvfs-afp gvfs-archive hamcrest-core hawtjni hawtjni hawtjni-runtime HdrHistogram HdrHistogram-javadoc highlight-gui hivex-devel hostname hplip-gui httpcomponents-project hwloc-plugins hyphen-fo hyphen-grc hyphen-hsb hyphen-ia hyphen-is hyphen-ku hyphen-mi hyphen-mn hyphen-sa hyphen-tk ibus-sayura icedax icu4j idm-console-framework inkscape inkscape-docs inkscape-view iptables ipython isl isl-devel isorelax istack-commons-runtime istack-commons-tools iwl3945-firmware iwl4965-firmware iwl6000-firmware jacoco jaf jaf-javadoc jakarta-oro janino jansi-native jarjar java-1.8.0-ibm java-1.8.0-ibm-demo java-1.8.0-ibm-devel java-1.8.0-ibm-headless java-1.8.0-ibm-jdbc java-1.8.0-ibm-plugin java-1.8.0-ibm-src 
java-1.8.0-ibm-webstart java-1.8.0-openjdk-accessibility java-1.8.0-openjdk-accessibility-slowdebug java_cup java-atk-wrapper javacc javacc-maven-plugin javaewah javaparser javapoet javassist javassist-javadoc jaxen jboss-annotations-1.2-api jboss-interceptors-1.2-api jboss-logmanager jboss-parent jctools jdepend jdependency jdom jdom2 jetty jetty-continuation jetty-http jetty-io jetty-security jetty-server jetty-servlet jetty-util jffi jflex jgit jline jmc jnr-netdb jolokia-jvm-agent js-uglify jsch json_simple jss-javadoc jtidy junit5 jvnet-parent jzlib kernel-cross-headers ksc kurdit-unikurd-web-fonts kyotocabinet-libs ldapjdk-javadoc lensfun lensfun-devel lftp-scripts libaec libaec-devel libappindicator-gtk3 libappindicator-gtk3-devel libatomic-static libavc1394 libblocksruntime libcacard libcacard-devel libcgroup libcgroup-tools libchamplain libchamplain-devel libchamplain-gtk libcroco libcroco-devel libcxl libcxl-devel libdap libdap-devel libdazzle-devel libdbusmenu libdbusmenu-devel libdbusmenu-doc libdbusmenu-gtk3 libdbusmenu-gtk3-devel libdc1394 libdnet libdnet-devel libdv libdwarf libdwarf-devel libdwarf-static libdwarf-tools libeasyfc libeasyfc-gobject libepubgen-devel libertas-sd8686-firmware libertas-usb8388-firmware libertas-usb8388-olpc-firmware libgdither libGLEW libgovirt libguestfs-benchmarking libguestfs-devel libguestfs-gfs2 libguestfs-gobject libguestfs-gobject-devel libguestfs-java libguestfs-java-devel libguestfs-javadoc libguestfs-man-pages-ja libguestfs-man-pages-uk libguestfs-tools libguestfs-tools-c libhugetlbfs libhugetlbfs-devel libhugetlbfs-utils libIDL libIDL-devel libidn libiec61883 libindicator-gtk3 libindicator-gtk3-devel libiscsi-devel libjose-devel libkkc libkkc-common libkkc-data libldb-devel liblogging libluksmeta-devel libmalaga libmcpp libmemcached libmemcached-libs libmetalink libmodulemd1 libmongocrypt libmtp-devel libmusicbrainz5 libmusicbrainz5-devel libnbd-devel liboauth liboauth-devel libpfm-static libpng12 libpurple libpurple-devel libraw1394 libreport-plugin-mailx libreport-plugin-rhtsupport libreport-plugin-ureport libreport-rhel libreport-rhel-bugzilla librpmem librpmem-debug librpmem-devel libsass libsass-devel libselinux-python libsqlite3x libtalloc-devel libtar libtdb-devel libtevent-devel libtpms-devel libunwind libusal libvarlink libverto-libevent libvirt-admin libvirt-bash-completion libvirt-daemon-driver-storage-gluster libvirt-daemon-driver-storage-iscsi-direct libvirt-devel libvirt-docs libvirt-gconfig libvirt-gobject libvirt-lock-sanlock libvirt-wireshark libvmem libvmem-debug libvmem-devel libvmmalloc libvmmalloc-debug libvmmalloc-devel libvncserver libwinpr-devel libwmf libwmf-devel libwmf-lite libXNVCtrl libyami log4j12 log4j12-javadoc lohit-malayalam-fonts lohit-nepali-fonts lorax-composer lua-guestfs lucene lucene-analysis lucene-analyzers-smartcn lucene-queries lucene-queryparser lucene-sandbox lz4-java lz4-java-javadoc mailman mailx make-devel malaga malaga-suomi-voikko marisa maven-antrun-plugin maven-assembly-plugin maven-clean-plugin maven-dependency-analyzer maven-dependency-plugin maven-doxia maven-doxia-sitetools maven-install-plugin maven-invoker maven-invoker-plugin maven-parent maven-plugins-pom maven-reporting-api maven-reporting-impl maven-resolver-api maven-resolver-connector-basic maven-resolver-impl maven-resolver-spi maven-resolver-transport-wagon maven-resolver-util maven-scm maven-script-interpreter maven-shade-plugin maven-shared maven-verifier maven-wagon-file maven-wagon-http maven-wagon-http-shared 
maven-wagon-provider-api maven2 meanwhile mercurial mercurial-hgk metis metis-devel mingw32-bzip2 mingw32-bzip2-static mingw32-cairo mingw32-expat mingw32-fontconfig mingw32-freetype mingw32-freetype-static mingw32-gstreamer1 mingw32-harfbuzz mingw32-harfbuzz-static mingw32-icu mingw32-libjpeg-turbo mingw32-libjpeg-turbo-static mingw32-libpng mingw32-libpng-static mingw32-libtiff mingw32-libtiff-static mingw32-openssl mingw32-readline mingw32-sqlite mingw32-sqlite-static mingw64-adwaita-icon-theme mingw64-bzip2 mingw64-bzip2-static mingw64-cairo mingw64-expat mingw64-fontconfig mingw64-freetype mingw64-freetype-static mingw64-gstreamer1 mingw64-harfbuzz mingw64-harfbuzz-static mingw64-icu mingw64-libjpeg-turbo mingw64-libjpeg-turbo-static mingw64-libpng mingw64-libpng-static mingw64-libtiff mingw64-libtiff-static mingw64-nettle mingw64-openssl mingw64-readline mingw64-sqlite mingw64-sqlite-static modello mojo-parent mongo-c-driver mousetweaks mozjs52 mozjs52-devel mozjs60 mozjs60-devel mozvoikko msv-javadoc msv-manual munge-maven-plugin mythes-mi mythes-ne nafees-web-naskh-fonts nbd nbdkit-devel nbdkit-example-plugins nbdkit-gzip-plugin nbdkit-plugin-python-common nbdkit-plugin-vddk ncompress ncurses-compat-libs net-tools netcf netcf-devel netcf-libs network-scripts network-scripts-ppp nkf nodejs-devel nodejs-packaging nss_nis nss-pam-ldapd objectweb-asm objectweb-asm-javadoc objectweb-pom ocaml-bisect-ppx ocaml-camlp4 ocaml-camlp4-devel ocaml-lwt ocaml-mmap ocaml-ocplib-endian ocaml-ounit ocaml-result ocaml-seq opencryptoki-tpmtok opencv-contrib opencv-core opencv-devel openhpi openhpi-libs OpenIPMI-perl openssh-cavs openssh-ldap openssl-ibmpkcs11 opentest4j os-maven-plugin pakchois pandoc paps-libs paranamer parfait parfait-examples parfait-javadoc pcp-parfait-agent pcp-pmda-rpm pcp-pmda-vmware pcsc-lite-doc peripety perl-B-Debug perl-B-Lint perl-Class-Factory-Util perl-Class-ISA perl-DateTime-Format-HTTP perl-DateTime-Format-Mail perl-File-CheckTree perl-homedir perl-libxml-perl perl-Locale-Codes perl-Mozilla-LDAP perl-NKF perl-Object-HashBase-tools perl-Package-DeprecationManager perl-Pod-LaTeX perl-Pod-Plainer perl-prefork perl-String-CRC32 perl-SUPER perl-Sys-Virt perl-tests perl-YAML-Syck phodav php-recode php-xmlrpc pidgin pidgin-devel pidgin-sipe pinentry-emacs pinentry-gtk pipewire0.2-devel pipewire0.2-libs platform-python-coverage plexus-ant-factory plexus-bsh-factory plexus-cli plexus-component-api plexus-component-factories-pom plexus-components-pom plexus-i18n plexus-interactivity plexus-pom plexus-velocity plymouth-plugin-throbgress pmreorder postgresql-test-rpm-macros powermock prometheus-jmx-exporter prometheus-jmx-exporter-openjdk11 ptscotch-mpich ptscotch-mpich-devel ptscotch-mpich-devel-parmetis ptscotch-openmpi ptscotch-openmpi-devel purple-sipe pygobject2-doc pygtk2 pygtk2-codegen pygtk2-devel pygtk2-doc python-nose-docs python-nss-doc python-podman-api python-psycopg2-doc python-pymongo-doc python-redis python-schedutils python-slip python-sqlalchemy-doc python-varlink python-virtualenv-doc python2-backports python2-backports-ssl_match_hostname python2-bson python2-coverage python2-docs python2-docs-info python2-funcsigs python2-ipaddress python2-mock python2-nose python2-numpy-doc python2-psycopg2-debug python2-psycopg2-tests python2-pymongo python2-pymongo-gridfs python2-pytest-mock python2-sqlalchemy python2-tools python2-virtualenv python3-bson python3-click python3-coverage python3-cpio python3-custodia python3-docs python3-flask python3-gevent 
python3-gobject-base python3-hivex python3-html5lib python3-hypothesis python3-ipatests python3-itsdangerous python3-jwt python3-libguestfs python3-mock python3-networkx-core python3-nose python3-nss python3-openipmi python3-pillow python3-ptyprocess python3-pydbus python3-pymongo python3-pymongo-gridfs python3-pyOpenSSL python3-pytoml python3-reportlab python3-schedutils python3-scons python3-semantic_version python3-slip python3-slip-dbus python3-sqlalchemy python3-syspurpose python3-virtualenv python3-webencodings python3-werkzeug python38-asn1crypto python38-numpy-doc python38-psycopg2-doc python38-psycopg2-tests python39-numpy-doc python39-psycopg2-doc python39-psycopg2-tests qemu-kvm-block-gluster qemu-kvm-block-iscsi qemu-kvm-block-ssh qemu-kvm-hw-usbredir qemu-kvm-device-display-virtio-gpu-gl qemu-kvm-device-display-virtio-gpu-pci-gl qemu-kvm-device-display-virtio-vga-gl qemu-kvm-tests qpdf qpdf-doc qpid-proton qrencode qrencode-devel qrencode-libs qt5-qtcanvas3d qt5-qtcanvas3d-examples rarian rarian-compat re2c recode redhat-lsb redhat-lsb-core redhat-lsb-cxx redhat-lsb-desktop redhat-lsb-languages redhat-lsb-printing redhat-lsb-submod-multimedia redhat-lsb-submod-security redhat-lsb-supplemental redhat-lsb-trialuse redhat-menus redhat-support-lib-python redhat-support-tool reflections regexp relaxngDatatype rhsm-gtk rpm-plugin-prioreset rpmemd rsyslog-udpspoof ruby-hivex ruby-libguestfs rubygem-abrt rubygem-abrt-doc rubygem-bson rubygem-bson-doc rubygem-bundler-doc rubygem-mongo rubygem-mongo-doc rubygem-net-telnet rubygem-xmlrpc s390utils-cmsfs samba-pidl samba-test samba-test-libs samyak-devanagari-fonts samyak-fonts-common samyak-gujarati-fonts samyak-malayalam-fonts samyak-odia-fonts samyak-tamil-fonts sane-frontends sanlk-reset sat4j scala scotch scotch-devel SDL_sound selinux-policy-minimum sendmail sgabios sgabios-bin shrinkwrap sisu-inject sisu-mojos sisu-plexus skkdic SLOF smc-anjalioldlipi-fonts smc-dyuthi-fonts smc-fonts-common smc-kalyani-fonts smc-raghumalayalam-fonts smc-suruma-fonts softhsm-devel sonatype-oss-parent sonatype-plugins-parent sos-collector sparsehash-devel spax spec-version-maven-plugin spice spice-client-win-x64 spice-client-win-x86 spice-glib spice-glib-devel spice-gtk spice-gtk-tools spice-gtk3 spice-gtk3-devel spice-gtk3-vala spice-parent spice-protocol spice-qxl-wddm-dod spice-server spice-server-devel spice-qxl-xddm spice-server spice-streaming-agent spice-vdagent-win-x64 spice-vdagent-win-x86 sssd-libwbclient star stax-ex stax2-api stringtemplate stringtemplate4 subscription-manager-initial-setup-addon subscription-manager-migration subscription-manager-migration-data subversion-javahl SuperLU SuperLU-devel supermin-devel swig swig-doc swig-gdb swtpm-devel swtpm-tools-pkcs11 system-storage-manager tcl-brlapi testng tibetan-machine-uni-fonts timedatex tpm-quote-tools tpm-tools tpm-tools-pkcs11 treelayout trousers trousers-lib tuned-profiles-compat tuned-profiles-nfv-host-bin tuned-utils-systemtap tycho uglify-js unbound-devel univocity-output-tester univocity-parsers usbguard-notifier usbredir-devel utf8cpp uthash velocity vinagre vino virt-dib virt-p2v-maker vm-dump-metrics-devel weld-parent wodim woodstox-core wqy-microhei-fonts wqy-unibit-fonts xdelta xmlgraphics-commons xmlstreambuffer xinetd xorg-x11-apps xorg-x11-drv-qxl xorg-x11-server-Xspice xpp3 xsane-gimp xsom xz-java xz-java-javadoc yajl-devel yp-tools ypbind ypserv 10.21. 
Deprecated and unmaintained devices This section lists devices (drivers, adapters) that continue to be supported until the end of life of RHEL 8 but will likely not be supported in future major releases of this product and are not recommended for new deployments. Support for devices other than those listed remains unchanged. These are deprecated devices. It also lists devices that are available but are no longer being tested or updated on a routine basis in RHEL 8. Red Hat may fix serious bugs, including security bugs, at its discretion. These devices should no longer be used in production, and it is likely they will be disabled in the next major release. These are unmaintained devices. PCI device IDs are in the format of vendor:device:subvendor:subdevice . If no device ID is listed, all devices associated with the corresponding driver have been deprecated. To check the PCI IDs of the hardware on your system, run the lspci -nn command. Table 10.1. Deprecated devices Device ID Driver Device name bnx2 QLogic BCM5706/5708/5709/5716 Driver hpsa Hewlett-Packard Company: Smart Array Controllers 0x10df:0x0724 lpfc Emulex Corporation: OneConnect FCoE Initiator (Skyhawk) 0x10df:0xe200 lpfc Emulex Corporation: LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter 0x10df:0xf011 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter 0x10df:0xf015 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter 0x10df:0xf100 lpfc Emulex Corporation: LPe12000 Series 8Gb Fibre Channel Adapter 0x10df:0xfc40 lpfc Emulex Corporation: Saturn-X: LightPulse Fibre Channel Host Adapter 0x10df:0xe220 be2net Emulex Corporation: OneConnect NIC (Lancer) 0x1000:0x005b megaraid_sas Broadcom / LSI: MegaRAID SAS 2208 [Thunderbolt] 0x1000:0x006E mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 0x1000:0x0080 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0081 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0082 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0083 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0084 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0085 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0086 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 0x1000:0x0087 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 myri10ge Myricom 10G driver (10GbE) netxen_nic QLogic/NetXen (1/10) GbE Intelligent Ethernet Driver 0x1077:0x2031 qla2xxx QLogic Corp.: ISP8324-based 16Gb Fibre Channel to PCI Express Adapter 0x1077:0x2532 qla2xxx QLogic Corp.: ISP2532-based 8Gb Fibre Channel to PCI Express HBA 0x1077:0x8031 qla2xxx QLogic Corp.: 8300 Series 10GbE Converged Network Adapter (FCoE) qla3xxx QLogic ISP3XXX Network Driver v2.03.00-k5 0x1924:0x0803 sfc Solarflare Communications: SFC9020 10G Ethernet Controller 0x1924:0x0813 sfc Solarflare Communications: SFL9021 10GBASE-T Ethernet Controller Soft-RoCE (rdma_rxe) HNS-RoCE HNS GE/10GE/25GE/50GE/100GE RDMA Network Controller liquidio Cavium LiquidIO Intelligent Server Adapter Driver liquidio_vf Cavium LiquidIO Intelligent Server Adapter Virtual Function Driver Table 10.2. 
Unmaintained devices Device ID Driver Device name e1000 Intel(R) PRO/1000 Network Driver mptbase Fusion MPT SAS Host driver mptsas Fusion MPT SAS Host driver mptscsih Fusion MPT SCSI Host driver mptspi Fusion MPT SAS Host driver 0x1000:0x0071 [a] megaraid_sas Broadcom / LSI: MR SAS HBA 2004 0x1000:0x0073 [a] megaraid_sas Broadcom / LSI: MegaRAID SAS 2008 [Falcon] 0x1000:0x0079 [a] megaraid_sas Broadcom / LSI: MegaRAID SAS 2108 [Liberator] nvmet_tcp NVMe/TCP target driver nvmet-fc NVMe/Fabrics FC target driver [a] Disabled in RHEL 8.0, re-enabled in RHEL 8.4 due to customer requests.
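For illustration only, the following commands show one way to perform the lspci -nn check described above; the device ID used is the Emulex OneConnect FCoE Initiator entry (0x10df:0x0724) from Table 10.1, and you can substitute any ID from the tables.
# List all PCI devices together with their vendor:device IDs:
lspci -nn
# Search for a specific deprecated ID; lspci prints IDs in lowercase hex without the 0x prefix:
lspci -nn | grep -i '10df:0724'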
[ "update-crypto-policies --set LEGACY", "Token authentication not supported by the entitlement server", "yum install network-scripts", "cat /etc/redhat-release", "yum remove ansible", "subscription-manager repos --disable ansible-2-for-rhel-8-x86_64-rpms", "yum install ansible-core" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/deprecated-functionality
Installing on OpenStack
Installing on OpenStack OpenShift Container Platform 4.16 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/index
10.12. Fencing Occurs at Random
10.12. Fencing Occurs at Random If you find that a node is being fenced at random, check for the following conditions. The root cause of fences is always a node losing the token, meaning that it lost communication with the rest of the cluster and stopped returning heartbeat. Any situation that results in a system not returning heartbeat within the specified token interval could lead to a fence. By default, the token interval is 10 seconds. It can be specified by adding the desired value (in milliseconds) to the token parameter of the totem tag in the cluster.conf file (for example, setting totem token="30000" for 30 seconds). Ensure that the network is sound and working as expected. Ensure that the interfaces the cluster uses for inter-node communication are not using any bonding mode other than 0, 1, or 2. (Bonding modes 0 and 2 are supported as of Red Hat Enterprise Linux 6.4.) Take measures to determine if the system is "freezing" or kernel panicking. Set up the kdump utility and see if you get a core during one of these fences. Make sure that you are not wrongly attributing some other event to a fence, for example the quorum disk ejecting a node due to a storage failure or a third party product like Oracle RAC rebooting a node due to some outside condition. The messages logs are often very helpful in determining such problems. Whenever fences or node reboots occur, it should be standard practice to inspect the messages logs of all nodes in the cluster from the time the reboot/fence occurred. Thoroughly inspect the system for hardware faults that may lead to the system not responding to heartbeat when expected.
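For illustration only, and assuming a standard cman-based RHEL 6 cluster, the token timeout can be raised as follows; the 30000 value is an example.
# In /etc/cluster/cluster.conf, raise the token timeout to 30 seconds (value in milliseconds):
#   <totem token="30000"/>
# After editing the file (and incrementing its config_version attribute),
# validate the configuration and propagate it to all cluster nodes:
ccs_config_validate
cman_tool version -r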
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-randomfence-ca
Integrating RHEL systems directly with Windows Active Directory
Integrating RHEL systems directly with Windows Active Directory Red Hat Enterprise Linux 9 Joining RHEL hosts to AD and accessing resources in AD Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/integrating_rhel_systems_directly_with_windows_active_directory/index
Chapter 1. Preparing to install on Azure Stack Hub
Chapter 1. Preparing to install on Azure Stack Hub 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have installed Azure Stack Hub version 2008 or later. 1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure Stack Hub account
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
Red Hat Enterprise Linux Software Certification Policy Guide
Red Hat Enterprise Linux Software Certification Policy Guide Red Hat Software Certification 2025 For Use with Red Hat Enterprise Linux Software Certification Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_enterprise_linux_software_certification_policy_guide/index
5.2. Network Tuning Tips
5.2. Network Tuning Tips Use multiple networks to avoid congestion on a single network. For example, have dedicated networks for management, backups and/or live migration. Usually, matching the default MTU (1500 bytes) in all components is sufficient. If you require larger messages, increasing the MTU value can reduce fragmentation. If you change the MTU, all devices in the path should have a matching MTU value. Use arp_filter to prevent ARP Flux, an undesirable condition that can occur in both hosts and guests and is caused by the machine responding to ARP requests from more than one network interface: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter or edit /etc/sysctl.conf to make this setting persistent. Note Refer to the following URL for more information on ARP Flux: http://linux-ip.net/html/ether-arp.html#ether-arp-flux
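As a brief example of making the arp_filter change persistent across reboots (a sketch only; the key name follows the standard sysctl mapping of the /proc/sys path shown above):
# echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf
# sysctl -p
The first command appends the setting to /etc/sysctl.conf and the second reloads that file so the value takes effect immediately without a reboot.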
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-networking-general_tips
Installing Satellite Server in a disconnected network environment
Installing Satellite Server in a disconnected network environment Red Hat Satellite 6.16 Install and configure Satellite Server in a network without Internet access Red Hat Satellite Documentation Team [email protected]
[ "nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2", "restorecon -R /var/lib/pulp", "firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"", "firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all", "ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com", "ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms", "hostnamectl set-hostname name", "cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original", "satellite-installer --tuning medium", "scp localfile username@hostname:remotefile", "mkdir /media/rhel", "mount -o loop rhel-DVD .iso /media/rhel", "cp /media/rhel/media.repo /etc/yum.repos.d/rhel.repo chmod u+w /etc/yum.repos.d/rhel.repo", "[RHEL-BaseOS] name=Red Hat Enterprise Linux BaseOS mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel/BaseOS/ [RHEL-AppStream] name=Red Hat Enterprise Linux Appstream mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel/AppStream/", "yum repolist", "mkdir /media/sat6", "mount -o loop sat6-DVD .iso /media/sat6", "dnf install fapolicyd", "satellite-maintain packages install fapolicyd", "systemctl enable --now fapolicyd", "systemctl status fapolicyd", "findmnt -t iso9660", "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "dnf upgrade", "cd /media/sat6/", "./install_packages", "cd /path-to-package/", "dnf install package_name", "cd /media/sat6/", "./install_packages", "satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password", "umount /media/sat6 umount /media/rhel8", "hammer settings set --name subscription_connection_enabled --value false", "scp ~/ manifest_file .zip root@ satellite.example.com :~/.", "hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"", "hammer organization configure-cdn --name=\" My_Organization \" --type=custom_cdn --url https:// my-cdn.example.com --ssl-ca-credential-id \" My_CDN_CA_Cert_ID \"", "hammer organization configure-cdn --name=\" My_Organization \" --type=export_sync", "hammer content-credential show --name=\" My_Upstream_CA_Cert \" --organization=\" My_Downstream_Organization \"", "hammer organization configure-cdn --name=\" My_Downstream_Organization \" --type=network_sync --url https:// upstream-satellite.example.com --username upstream_username --password upstream_password --ssl-ca-credential-id \" My_Upstream_CA_Cert_ID\" \\ --upstream-organization-label=\"_My_Upstream_Organization \" [--upstream-lifecycle-environment-label=\" My_Lifecycle_Environment \"] [--upstream-content-view-label=\" My_Content_View \"]", "satellite-installer 
--foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt", "firewall-cmd --add-service=mqtt", "firewall-cmd --runtime-to-permanent", "satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"", "satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3", "satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false", "Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0", "cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust", "mkdir /root/satellite_cert", "openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096", "[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = satellite.example.com", "[req_distinguished_name] CN = satellite.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4", "openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3", "katello-certs-check -c /root/satellite_cert/satellite_cert.pem \\ 1 -k /root/satellite_cert/satellite_cert_key.pem \\ 2 -b /root/satellite_cert/ca_cert_bundle.pem 3", "Validation succeeded. 
To install the Red Hat Satellite Server with the custom certificates, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" To update the certificates on a currently running Red Hat Satellite installation, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca", "dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms", "dnf repolist enabled", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "dnf module enable satellite:el8", "dnf repolist enabled", "dnf install postgresql-server postgresql-evr postgresql-contrib", "postgresql-setup initdb", "vi /var/lib/pgsql/data/postgresql.conf", "listen_addresses = '*'", "password_encryption=scram-sha-256", "vi /var/lib/pgsql/data/pg_hba.conf", "host all all Satellite_ip /32 scram-sha-256", "systemctl enable --now postgresql", "firewall-cmd --add-service=postgresql", "firewall-cmd --runtime-to-permanent", "su - postgres -c psql", "CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;", "postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".", "pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION", "\\q", "PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"", "satellite-installer --katello-candlepin-manage-db false --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-user candlepin --katello-candlepin-db-password Candlepin_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-user pulp --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-db-manage false --foreman-db-host postgres.example.com --foreman-db-database foreman --foreman-db-username foreman --foreman-db-password Foreman_Password", "--foreman-db-root-cert <path_to_CA> --foreman-db-sslmode verify-full --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca 
<path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-ca <path_to_CA> --katello-candlepin-db-ssl-verify true", "scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key", "restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key", "echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key", "satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key", "dnf install dhcp-server bind-utils", "tsig-keygen -a hmac-md5 omapi_key", "cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;", "firewall-cmd --add-service dhcp", "firewall-cmd --runtime-to-permanent", "id -u foreman 993 id -g foreman 990", "groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman", "chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf", "systemctl enable --now dhcpd", "dnf install nfs-utils systemctl enable --now nfs-server", "mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp", "/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0", "mount -a", "/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)", "exportfs -rva", "firewall-cmd --add-port=7911/tcp", "firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public", "firewall-cmd --runtime-to-permanent", "satellite-maintain packages install nfs-utils", "mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd", "chown -R foreman-proxy /mnt/nfs", "showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN", "DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0", "mount -a", "su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit", "satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911", 
"update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract", "curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network", "[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]", "satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-username admin --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default", "satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-username admin --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-dns-view default", "mkdir -p /mnt/nfs/var/lib/tftpboot", "TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0", "mount -a", "satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true", "satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN", "kinit idm_user", "ipa service-add capsule/satellite.example.com", "satellite-maintain packages install ipa-client", "ipa-client-install", "kinit admin", "rm /etc/foreman-proxy/dns.keytab", "ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab", "chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab", "kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]", "grant capsule\\047 [email protected] wildcard * ANY;", "grant capsule\\047 [email protected] wildcard * ANY;", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true", "######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################", "systemctl reload named", "grant \"rndc-key\" zonesub ANY;", "scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key", "restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key", "usermod -a -G named foreman-proxy", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key", "key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };", "echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "Server: 192.168.25.1 Address: 
192.168.25.1#53 Name: test.example.com Address: 192.168.25.20", "echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "satellite-installer", "satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true", "satellite-maintain packages install ipa-client", "ipa-client-install", "foreman-prepare-realm admin realm-capsule", "scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab", "mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab", "satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa", "cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust", "systemctl restart foreman-proxy", "ipa hostgroup-add hostgroup_name --desc= hostgroup_description", "ipa automember-add --type=hostgroup hostgroup_name automember_rule", "ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name \" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------", "apache::server_tokens: Prod", "apache::server_signature: Off", "cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup", "journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1", "puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1", "hammer organization configure-cdn --name=\" My_Organization \" --type=redhat_cdn" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/installing_satellite_server_in_a_disconnected_network_environment/index
Chapter 5. Getting Started with Virtual Machine Manager
Chapter 5. Getting Started with Virtual Machine Manager The Virtual Machine Manager, also known as virt-manager , is a graphical tool for creating and managing guest virtual machines. This chapter provides a description of the Virtual Machine Manager and how to run it. Note You can only run the Virtual Machine Manager on a system that has a graphical interface. For more detailed information about using the Virtual Machine Manager, see the other Red Hat Enterprise Linux virtualization guides . 5.1. Running Virtual Machine Manager To run the Virtual Machine Manager, select it in the list of applications or use the following command: The Virtual Machine Manager opens to the main window. Figure 5.1. The Virtual Machine Manager Note If running virt-manager fails, ensure that the virt-manager package is installed. For information on installing the virt-manager package, see Installing the Virtualization Packages in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide.
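If running virt-manager fails, a quick way to confirm whether the package is present (a sketch, assuming a yum-based Red Hat Enterprise Linux 7 host with the appropriate repositories enabled) is:
# rpm -q virt-manager
# yum install virt-manager
The rpm -q command reports the installed package version, or states that the package is not installed; the yum command installs it if it is missing.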
[ "virt-manager" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/chap-virtualization_manager-introduction
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of the text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. To submit more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation. Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere/providing-feedback-on-red-hat-documentation_rhodf
Chapter 3. Preparing Storage for Red Hat Virtualization
Chapter 3. Preparing Storage for Red Hat Virtualization You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Prerequisites Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Manager virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation. Warning Extending or otherwise changing the self-hosted engine storage domain after deployment of the self-hosted engine is not supported. Any such change might prevent the self-hosted engine from booting. When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine. If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target. It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine. 3.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. 
# vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which exports are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable. 3.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 3.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 3.4. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 3.5. Customizing Multipath Configurations for SAN Vendors If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf . To override the multipath settings, do not customize /etc/multipath.conf . Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override. VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf . To avoid causing severe storage failures, follow these guidelines: Do not modify /etc/multipath.conf . If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems. Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf . Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf . Warning Not following these guidelines can cause catastrophic storage errors. Prerequisites VDSM is configured to use the multipath module. To verify this, enter: Procedure Create a new configuration file in the /etc/multipath/conf.d directory. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf . Remove any comment marks, edit the setting values, and save your changes. Apply the new configuration settings by entering: Note Do not restart the multipathd service. Doing so generates errors in the VDSM logs. Verification steps Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections. Enable one connection at a time and verify that doing so makes the storage domain reachable. Additional resources Recommended Settings for Multipath.conf Red Hat Enterprise Linux DM Multipath Configuring iSCSI Multipathing How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? 3.6. 
Recommended Settings for Multipath.conf Do not override the following settings: user_friendly_names no Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID}. The default value of this setting, no, prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior. Warning Do not change this setting to user_friendly_names yes. User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported. find_multipaths no This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no, allows RHV to access devices through multipath even if only one path is available. Warning Do not override this setting. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so that when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the next time all paths fail. For more details, see the commit that changed this setting. polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
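As an illustrative sketch of the drop-in override approach described in section 3.5 (the vendor and product strings below are placeholders rather than vendor-verified values; always copy the actual settings from your storage vendor's documentation), a customization file and the reload step might look like this:
# cat /etc/multipath/conf.d/90-myvendor.conf
devices {
    device {
        vendor "MYVENDOR"
        product "MYARRAY"
        path_grouping_policy "group_by_prio"
    }
}
# systemctl reload multipathd
Because the filename begins with 90, it sorts after the other configuration fragments, so, per the guidance above, its settings are applied last; reloading (rather than restarting) multipathd then applies the change.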
[ "dnf install nfs-utils -y", "cat /proc/fs/nfsd/versions", "systemctl enable nfs-server systemctl enable rpcbind", "groupadd kvm -g 36", "useradd vdsm -u 36 -g kvm", "mkdir /storage chmod 0755 /storage chown 36:36 /storage/", "vi /etc/exports cat /etc/exports /storage *(rw)", "systemctl restart rpcbind systemctl restart nfs-server", "exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }", "vdsm-tool is-configured --module multipath", "systemctl reload multipathd" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/preparing_storage_for_rhv_she_cli_deploy
Hardening Red Hat OpenStack Platform
Hardening Red Hat OpenStack Platform Red Hat OpenStack Platform 17.1 Good Practices, Compliance, and Security Hardening OpenStack Documentation Team [email protected]
[ "openstack network create internal-network", "openstack network create internal-network --project testing", "source /home/stack/stackrc", "openstack subnet list -c Name -c Subnet +---------------------+------------------+ | Name | Subnet | +---------------------+------------------+ | ctlplane-subnet | 192.168.101.0/24 | | storage_mgmt_subnet | 172.16.105.0/24 | | tenant_subnet | 172.16.102.0/24 | | external_subnet | 10.94.81.0/24 | | internal_api_subnet | 172.16.103.0/24 | | storage_subnet | 172.16.104.0/24 | +---------------------+------------------+", "openstack server list -c Name -c Networks +-------------------------+-------------------------+ | Name | Networks | +-------------------------+-------------------------+ | overcloud-controller-0 | ctlplane=192.168.101.15 | | overcloud-controller-1 | ctlplane=192.168.101.19 | | overcloud-controller-2 | ctlplane=192.168.101.14 | | overcloud-novacompute-0 | ctlplane=192.168.101.18 | | overcloud-novacompute-2 | ctlplane=192.168.101.17 | | overcloud-novacompute-1 | ctlplane=192.168.101.11 | +-------------------------+-------------------------+", "ssh [email protected] ip addr", "cp overcloud-deploy/overcloud/overcloud-passwords.yaml overcloud-deploy/overcloud/overcloud-passwords.yaml.old", "ansible-playbook -i ./tripleo-deploy/undercloud/tripleo-ansible-inventory.yaml /usr/share/ansible/tripleo-playbooks/rotate-passwords.yaml", "[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details PLAY [Rotate passwords] ************************************************************************************************************************************************************************************ TASK [Set passwords environment file path] ***************************************************************************************************************************************************************** ok: [undercloud-0] TASK [Rotate passwords] ************************************************************************************************************************************************************************************ changed: [undercloud-0] TASK [Create rotated password parameter fact] ************************************************************************************************************************************************************** ok: [undercloud-0] TASK [Update existing password environment file] *********************************************************************************************************************************************************** changed: [undercloud-0] PLAY RECAP ************************************************************************************************************************************************************************************************* undercloud-0 : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "cat > ~/templates/firewall.yaml <<EOF parameter_defaults: ExtraFirewallRules: '300 allow custom application 1': dport: 999 proto: udp '301 allow custom application 2': dport: 8081 proto: tcp EOF", "openstack overcloud deploy --templates / -e /home/stack/templates/firewall.yaml / .", "[tripleo-admin@overcloud-controller-2 ~]USD sudo iptables -L | grep rabbitmq ACCEPT tcp -- anywhere anywhere multiport dports vtr-emulator,epmd,amqp,25672,25673:25683 state NEW /* 109 rabbitmq-bundle ipv4 */", "cat > ~/templates/firewall.yaml <<EOF parameter_defaults: ExtraFirewallRules: '098 allow rabbit from internalapi network': dport: - 4369 - 5672 - 25672 proto: 
tcp source: 10.0.0.0/24 '099 drop other rabbit access': dport: - 4369 - 5672 - 25672 proto: tcp action: drop EOF", "openstack overcloud deploy --templates / -e /home/stack/templates/firewall.yaml / .", "parameter_defaults: ExtraConfig: snmp::ro_community: mysecurestring snmp::ro_community6: myv6securestring", "parameter_defaults: ExtraConfig: snmp::com2sec: [\"notConfigUser default mysecurestring\"] snmp::com2sec6: [\"notConfigUser default myv6securestring\"]", "NeutronOVSFirewallDriver: openvswitch", "cd ~/templates tree . ├── environments │ └── network-environment.yaml ├── hci.yaml ├── network │ └── config │ └── multiple-nics │ ├── computehci.yaml │ ├── compute.yaml │ └── controller.yaml ├── network_data.yaml ├── plan-environment.yaml └── roles_data_hci.yaml", "cd ~/templates find . -name *role* > ./templates/roles_data_hci.yaml", "openstack overcloud roles generate > --roles-path /usr/share/openstack-tripleo-heat-templates/roles > -o roles_data.yaml Controller Compute", "source ~/stackrc", "openstack baremetal node list -c Name +--------------+ | Name | +--------------+ | controller-0 | | controller-1 | | controller-2 | | compute-0 | | compute-1 | | compute-2 | +--------------+", "openstack baremetal introspection data save <node> | jq", "openstack baremetal introspection data save controller-0 | jq '.inventory | keys' [ \"bmc_address\", \"bmc_v6address\", \"boot\", \"cpu\", \"disks\", \"hostname\", \"interfaces\", \"memory\", \"system_vendor\" ]", "openstack baremetal introspection data save controller-1 | jq '.inventory.disks' [ { \"name\": \"/dev/sda\", \"model\": \"QEMU HARDDISK\", \"size\": 85899345920, \"rotational\": true, \"wwn\": null, \"serial\": \"QM00001\", \"vendor\": \"ATA\", \"wwn_with_extension\": null, \"wwn_vendor_extension\": null, \"hctl\": \"0:0:0:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:00:01.1-ata-1\" } ]", "cat /etc/hosts source stackrc ; openstack endpoint list source overcloudrc ; openstack endpoint list", "ssh tripleo-admin@compute-0 podman ps", "parameter_defaults KeystoneLockoutDuration: 3600 KeystoneLockoutFailureAttempts: 3", "openstack overcloud deploy --templates -e keystone_config.yaml", "./cipherscan https://openstack.lab.local ........................... 
Target: openstack.lab.local:443 prio ciphersuite protocols pfs curves 1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 ECDH,P-256,256bits prime256v1 2 ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 ECDH,P-256,256bits prime256v1 3 DHE-RSA-AES128-GCM-SHA256 TLSv1.2 DH,1024bits None 4 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 DH,1024bits None 5 ECDHE-RSA-AES128-SHA256 TLSv1.2 ECDH,P-256,256bits prime256v1 6 ECDHE-RSA-AES256-SHA384 TLSv1.2 ECDH,P-256,256bits prime256v1 7 ECDHE-RSA-AES128-SHA TLSv1.2 ECDH,P-256,256bits prime256v1 8 ECDHE-RSA-AES256-SHA TLSv1.2 ECDH,P-256,256bits prime256v1 9 DHE-RSA-AES128-SHA256 TLSv1.2 DH,1024bits None 10 DHE-RSA-AES128-SHA TLSv1.2 DH,1024bits None 11 DHE-RSA-AES256-SHA256 TLSv1.2 DH,1024bits None 12 DHE-RSA-AES256-SHA TLSv1.2 DH,1024bits None 13 ECDHE-RSA-DES-CBC3-SHA TLSv1.2 ECDH,P-256,256bits prime256v1 14 EDH-RSA-DES-CBC3-SHA TLSv1.2 DH,1024bits None 15 AES128-GCM-SHA256 TLSv1.2 None None 16 AES256-GCM-SHA384 TLSv1.2 None None 17 AES128-SHA256 TLSv1.2 None None 18 AES256-SHA256 TLSv1.2 None None 19 AES128-SHA TLSv1.2 None None 20 AES256-SHA TLSv1.2 None None 21 DES-CBC3-SHA TLSv1.2 None None Certificate: trusted, 2048 bits, sha256WithRSAEncryption signature TLS ticket lifetime hint: None NPN protocols: None OCSP stapling: not supported Cipher ordering: server Curves ordering: server - fallback: no Server supports secure renegotiation Server supported compression methods: NONE TLS Tolerance: yes Intolerance to: SSL 3.254 : absent TLS 1.0 : PRESENT TLS 1.1 : PRESENT TLS 1.2 : absent TLS 1.3 : absent TLS 1.4 : absent", "Intolerance to: SSL 3.254 : absent TLS 1.0 : PRESENT TLS 1.1 : PRESENT TLS 1.2 : absent TLS 1.3 : absent TLS 1.4 : absent", "search example.com bigcorp.com nameserver USDIDM_SERVER_IP_ADDR", "sudo dnf install -y python3-ipalib python3-ipaclient krb5-devel", "export IPA_DOMAIN=bigcorp.com export IPA_REALM=BIGCORP.COM export IPA_ADMIN_USER=USDIPA_USER 1 export IPA_ADMIN_PASSWORD=USDIPA_PASSWORD 2 export IPA_SERVER_HOSTNAME=ipa.bigcorp.com export UNDERCLOUD_FQDN=undercloud.example.com 3 export USER=stack export CLOUD_DOMAIN=example.com", "ansible-playbook --ssh-extra-args \"-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null\" /usr/share/ansible/tripleo-playbooks/undercloud-ipa-install.yaml", "undercloud_nameservers = USDIDM_SERVER_IP_ADDR overcloud_domain_name = example.com", "parameter_defaults: certmonger_krb_realm: EXAMPLE.COMPANY.COM", "custom_env_files = /home/stack/hiera_override.yaml", "openstack undercloud install", "kinit admin ipa host-find", "ls /etc/novajoin/krb5.keytab", "parameter_defaults: DnsSearchDomains: [\"example.com\"] CloudDomain: example.com CloudName: overcloud.example.com CloudNameInternal: overcloud.internalapi.example.com CloudNameStorage: overcloud.storage.example.com CloudNameStorageManagement: overcloud.storagemgmt.example.com CloudNameCtlplane: overcloud.ctlplane.example.com IdMServer: freeipa-0.redhat.local IdMDomain: redhat.local IdMInstallClientPackages: False resource_registry: OS::TripleO::Services::IpaClient: /usr/share/openstack-tripleo-heat-templates/deployment/ipa/ipaservices-baremetal-ansible.yaml", "CertmongerKerberosRealm: EXAMPLE.COMPANY.COM", "DEFAULT_TEMPLATES=/usr/share/openstack-tripleo-heat-templates/ CUSTOM_TEMPLATES=/home/stack/templates openstack overcloud deploy -e USD{DEFAULT_TEMPLATES}/environments/ssl/tls-everywhere-endpoints-dns.yaml -e USD{DEFAULT_TEMPLATES}/environments/services/haproxy-public-tls-certmonger.yaml -e USD{DEFAULT_TEMPLATES}/environments/ssl/enable-internal-tls.yaml -e 
USD{CUSTOM_TEMPLATES}/tls-parameters.yaml", "openstack endpoint list", "parameter_defaults: MemcachedTLS: true MemcachedPort: 11212", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml -e /home/stack/memcached.yaml", "parameter_defaults: CertificateKeySize: '4096' RedisCertificateKeySize: '2048'", "#File modified by ipa-client-install [global] basedn = dc=redhat,dc=local realm = REDHAT.LOCAL domain = redhat.local server = freeipa-0.redhat.local host = undercloud-0.redhat.local xmlrpc_uri = https://freeipa-0.redhat.local/ipa/xml enable_ra = True", "export IPA_DOMAIN=bigcorp.com export IPA_REALM=BIGCORP.COM export IPA_ADMIN_USER=USDIPA_USER export IPA_ADMIN_PASSWORD=USDIPA_PASSWORD export IPA_SERVER_HOSTNAME=ipa.bigcorp.com export UNDERCLOUD_FQDN=undercloud.example.com export USER=stack export CLOUD_DOMAIN=example.com", "sudo mkdir -p /etc/pki/CA sudo touch /etc/pki/CA/index.txt", "echo '1000' | sudo tee /etc/pki/CA/serial", "openssl genrsa -out ca.key.pem 4096 openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem", "parameter_defaults: PublicTLSCAFile: /etc/pki/ca-trust/source/anchors/cacert.pem", "sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "openssl genrsa -out server.key.pem 2048", "cp /etc/pki/tls/openssl.cnf .", "[req] distinguished_name = req_distinguished_name req_extensions = v3_req [req_distinguished_name] countryName = Country Name (2 letter code) countryName_default = AU stateOrProvinceName = State or Province Name (full name) stateOrProvinceName_default = Queensland localityName = Locality Name (eg, city) localityName_default = Brisbane organizationalUnitName = Organizational Unit Name (eg, section) organizationalUnitName_default = Red Hat commonName = Common Name commonName_default = 192.168.0.1 commonName_max = 64 Extensions to add to a certificate request basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] IP.1 = 192.168.0.1 DNS.1 = instack.localdomain DNS.2 = vip.localdomain DNS.3 = 192.168.0.1", "openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem", "sudo mkdir -p /etc/pki/CA/newcerts", "sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem", "cat server.crt.pem server.key.pem > undercloud.pem", "sudo mkdir /etc/pki/undercloud-certs sudo cp ~/undercloud.pem /etc/pki/undercloud-certs/. 
sudo semanage fcontext -a -t etc_t \"/etc/pki/undercloud-certs(/.*)?\" sudo restorecon -R /etc/pki/undercloud-certs", "undercloud_service_certificate = /etc/pki/undercloud-certs/undercloud.pem", "sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/ sudo update-ca-trust extract", "openssl crl2pkcs7 -nocrl -certfile /etc/pki/tls/certs/ca-bundle.crt | openssl pkcs7 -print_certs -text | grep <CN of the CA issuer> -A 10 -B 10", "cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml ~/templates/.", "parameter_defaults: SSLCertificate: | -----BEGIN CERTIFICATE----- MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGS sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQ -----END CERTIFICATE-----", "parameter_defaults: SSLIntermediateCertificate: | -----BEGIN CERTIFICATE----- sFW3S2roS4X0Af/kSSD8mlBBTFTCMBAj6rtLBKLaQbIxEpIzrgvpBCwUAMFgxCzAJB MIIDgzCCAmugAwIBAgIJAKk46qw6ncJaMA0GCSqGSIb3DQE -----END CERTIFICATE-----", "parameter_defaults: SSLKey: | -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdLN5PJP0lVO ctlKn3rAAdyumi4JDjESAXHIKFjJNOLrBmpQyES4X -----END RSA PRIVATE KEY-----", "cp -r /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor-hiera.yaml ~/templates/.", "parameter_defaults: CAMap: undercloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCS BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBw UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBA -----END CERTIFICATE----- overcloud-ca: content: | -----BEGIN CERTIFICATE----- MIIDBzCCAe+gAwIBAgIJAIc75A7FD++DMA0GCS BAMMD3d3dy5leGFtcGxlLmNvbTAeFw0xOTAxMz Um54yGCARyp3LpkxvyfMXX1DokpS1uKi7s6CkF -----END CERTIFICATE-----", "openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/custom-domain.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml", "openstack overcloud deploy --templates [...] 
--limit Controller --tags facts,host_prep_steps -e /home/stack/templates/enable-tls.yaml -e ~/templates/custom-domain.yaml -e ~/templates/inject-trust-anchor-hiera.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-dns.yaml", "[stack@director ~]USD source ~/stackrc [stack@director ~]USD openstack server list -------------------------------------- ------------------------- -------- ---------------------+ | ID | Name | Status | Networks | -------------------------------------- ------------------------- -------- ---------------------+ | 756fbd73-e47b-46e6-959c-e24d7fb71328 | overcloud-controller-0 | ACTIVE | ctlplane=192.0.2.16 | | 62b869df-1203-4d58-8e45-fac6cd4cfbee | overcloud-novacompute-0 | ACTIVE | ctlplane=192.0.2.8 | -------------------------------------- ------------------------- -------- ---------------------+", "[tripleo-admin@overcloud-controller-0 ~]USD ssh [email protected]", "[tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf token driver sql [tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf token provider fernet", "[tripleo-admin@overcloud-controller-0 ~]USD exit [stack@director ~]USD source ~/overcloudrc [stack@director ~]USD openstack token issue ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2016-09-20 05:26:17+00:00 | | id | gAAAAABX4LppE8vaiFZ992eah2i3edpO1aDFxlKZq6a_RJzxUx56QVKORrmW0-oZK3-Xuu2wcnpYq_eek2SGLz250eLpZOzxKBR0GsoMfxJU8mEFF8NzfLNcbuS-iz7SV-N1re3XEywSDG90JcgwjQfXW-8jtCm-n3LL5IaZexAYIw059T_-cd8 | | project_id | 26156621d0d54fc39bf3adb98e63b63d | | user_id | 397daf32cadd490a8f3ac23a626ac06c | ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "fips-mode-setup --enable", "fips-mode-setup --check", "sudo dnf -y install rhosp-director-images-uefi-fips-x86_64", "mkdir /home/stack/images cd /home/stack/images", "for i in /usr/share/rhosp-director-images/*fips*.tar; do tar -xvf USDi; done", "ln -s ironic-python-agent-fips.initramfs ironic-python-agent.initramfs ln -s ironic-python-agent-fips.kernel ironic-python-agent.kernel ln -s overcloud-hardened-uefi-full-fips.qcow2 overcloud-hardened-uefi-full.qcow2", "openstack overcloud image upload --update-existing --whole-disk", "openstack overcloud deploy -e /usr/share/openstack-tripleo-heat-templates/environments/fips.yaml", "openstack overcloud deploy --templates ... 
-e /usr/share/openstack-tripleo-heat-templates/environments/enable-secure-rbac.yaml", "openstack role add --user <user> --user-domain <domain> --project <project> --project-domain <project-domain> admin", "openstack role add --user <user> --user-domain <domain> --project <project> --project-domain <project-domain> member", "openstack role add --user <user> --user-domain <domain> --project <project> --project-domain <project-domain> reader", "/etc/USDservice/policy.json", "/var/lib/config-data/puppet-generated/USDservice/etc/USDservice/policy.json", "exec -i keystone oslopolicy-policy-generator --namespace keystone", "\"identity:create_user\": \"rule:admin_required\"", "\"identity:create_user\": \"!\"", "openstack role create PowerUsers +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | 7061a395af43455e9057ab631ad49449 | | name | PowerUsers | +-----------+----------------------------------+", "openstack role add --project [PROJECT_NAME] --user [USER_ID] [PowerUsers-ROLE_ID]", "sudo podman exec -i nova_compute oslopolicy-policy-generator --namespace nova", "sudo podman exec -it nova_compute oslopolicy-policy-generator --namespace nova > /tmp/nova-policy.json", "\"os_compute_api:servers:start\": \"role:PowerUsers\", \"os_compute_api:servers:stop\": \"role:PowerUsers\" \"os_compute_api:servers:create:attach_volume\": \"role:PowerUsers\" \"os_compute_api:os-volumes-attachments:index\": \"role:PowerUsers\" \"os_compute_api:os-volumes-attachments:create\": \"role:PowerUsers\" \"os_compute_api:os-volumes-attachments:show\": \"role:PowerUsers\" \"os_compute_api:os-volumes-attachments:update\": \"role:PowerUsers\" \"os_compute_api:os-volumes-attachments:delete\": \"role:PowerUsers\"", "cp /tmp/nova-policy.json nova_compute:etc/nova/policy.json", "\"os_compute_api:os-keypairs:delete\": \"rule:admin_api or user_id:%(user_id)s\"", "\"admin_or_owner\": \"is_admin:True or project_id:%(project_id)s\"", "\"admin_or_user\": \"is_admin:True or user_id:%(user_id)s\" \"os_compute_api:os-instance-actions\": \"rule:admin_or_user\"", "parameter_defaults: KeystonePolicies: { keystone-context_is_admin: { key: context_is_admin, value: 'role:admin' } }", "parameter_defaults: NovaApiPolicies: { nova-context_is_admin: { key: 'compute:get_all', value: '@' } }", "openstack role list -c Name -f value swiftoperator ResellerAdmin admin _member_ heat_stack_user", "openstack role assignment list --names --role admin +-------+------------------------------------+-------+-----------------+------------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +-------+------------------------------------+-------+-----------------+------------+--------+-----------+ | admin | heat-cfn@Default | | service@Default | | | False | | admin | placement@Default | | service@Default | | | False | | admin | neutron@Default | | service@Default | | | False | | admin | zaqar@Default | | service@Default | | | False | | admin | swift@Default | | service@Default | | | False | | admin | admin@Default | | admin@Default | | | False | | admin | zaqar-websocket@Default | | service@Default | | | False | | admin | heat@Default | | service@Default | | | False | | admin | ironic-inspector@Default | | service@Default | | | False | | admin | nova@Default | | service@Default | | | False | | admin | ironic@Default | | service@Default | | | False | | admin | glance@Default | | service@Default | | | False | | admin | mistral@Default | | 
service@Default | | | False | | admin | heat_stack_domain_admin@heat_stack | | | heat_stack | | False | | admin | admin@Default | | | | all | False | +-------+------------------------------------+-------+-----------------+------------+--------+-----------+", "openstack token issue --debug 2>&1 | egrep ^'{\\\"token\\\":' > access.file.json", "ssh tripleo-admin@CONTROLLER-1 sudo podman exec cinder_api oslopolicy-policy-generator --config-file /etc/cinder/cinder.conf --namespace cinder > cinderpolicy.json", "oslopolicy-checker --policy cinderpolicy.json --access access.file.json", "cat /proc/cpuinfo | egrep \"vmx|svm\"", "ssh stack@director", "ansible-playbook -i /home/stack/overcloud-deploy/<stack_name>/tripleo-ansible-inventory.yaml \\ 1 -e undercloud_backup_folder=/home/stack/overcloud_backup_keys \\ 2 -e stack_name=<stack_name> \\ 3 /usr/share/ansible/tripleo-playbooks/ssh_key_rotation.yaml", "ansible-playbook -i /home/stack/overcloud-deploy/<stack_name>/tripleo-ansible-inventory.yaml \\ 1 -e stack_name=<stack_name> -e rotate_undercloud_key=false \\ 2 -e ansible_ssh_private_key_file=/home/stack/overcloud_backup_keys/id_rsa 3 tripleo-ansible/playbooks/ssh_key_rotation.yaml", "sudo podman inspect <container_name> | less", "sudo less /var/log/containers/nova/nova-compute.log", "exec -it nova_compute /bin/bash", "sudo sed -i 's/^debug=.*/debug=True' /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf", "sudo podman restart nova_compute", "cat > /home/stack/templates/libvirt-keysize.yaml parameter_defaults: LibvirtCertificateKeySize: 4096 EOF", "openstack overcloud deploy --templates -e /home/stack/templates/libvirt-keysize.yaml", "./deploy.sh", "resource_registry: OS::TripleO::Services::Sshd: /usr/share/openstack-tripleo-heat-templates/deployment/sshd/sshd-baremetal-puppet.yaml parameter_defaults: BannerText: | ****************************************************************** * This system is for the use of authorized users only. Usage of * * this system may be monitored and recorded by system personnel. * * Anyone using this system expressly consents to such monitoring * * and is advised that if such monitoring reveals possible * * evidence of criminal activity, system personnel may provide * * the evidence from such monitoring to law enforcement officials.* ******************************************************************", "openstack overcloud deploy --templates -e <full environment> -e ssh_banner.yaml", "resource_registry: OS::TripleO::Services::AuditD: /usr/share/openstack-tripleo-heat-templates/deployment/auditd/auditd-baremetal-puppet.yaml parameter_defaults: AuditdRules: 'Record Events that Modify User/Group Information': content: '-w /etc/group -p wa -k audit_rules_usergroup_modification' order : 1 'Collects System Administrator Actions': content: '-w /etc/sudoers -p wa -k actions' order : 2 'Record Events that Modify the Systems Mandatory Access Controls': content: '-w /etc/selinux/ -p wa -k MAC-policy' order : 3", "parameter_defaults: ControllerExtraConfig: ExtraFirewallRules: '301 allow zabbix': dport: 10050 proto: tcp source: 10.0.0.8", "parameter_defaults: ControllerParameters ExtraFirewallRules: '098 allow rabbit from internalapi network': dport: [4369,5672,25672] proto: tcp source: 10.0.0.0/24 '099 drop other rabbit access: dport: [4369,5672,25672] proto: tcp action: drop", "iptables-save [...] 
-A INPUT -p tcp -m multiport --dports 4369,5672,25672 -m comment --comment \"109 rabbitmq\" -m state --state NEW -j ACCEPT", "ExtraFirewallRules: '109 rabbitmq': dport: - 4369 - 5672 - 25672", "resource_registry: OS::TripleO::Services::Aide: /usr/share/openstack-tripleo-heat-templates/deployment/aide/aide-baremetal-ansible.yaml parameter_defaults: AideRules: 'TripleORules': content: 'TripleORules = p+sha256' order: 1 'etc': content: '/etc/ TripleORules' order: 2 'boot': content: '/boot/ TripleORules' order: 3 'sbin': content: '/sbin/ TripleORules' order: 4 'var': content: '/var/ TripleORules' order: 5 'not var/log': content: '!/var/log.*' order: 6 'not var/spool': content: '!/var/spool.*' order: 7 'not nova instances': content: '!/var/lib/nova/instances.*' order: 8", "openstack overcloud deploy --templates -e /home/stack/templates/aide.yaml", "MyAlias = p+i+n+u+g+s+b+m+c+sha512", "resource_registry: OS::TripleO::Services::Securetty: ../puppet/services/securetty.yaml parameter_defaults: TtyValues: - console - tty1 - tty2 - tty3 - tty4 - tty5 - tty6", "parameter_defaults: KeystoneNotificationFormat: cadf", "resource_registry: OS::TripleO::Services::LoginDefs: ../puppet/services/login-defs.yaml parameter_defaults: PasswordMaxDays: 60 PasswordMinDays: 1 PasswordMinLen: 5 PasswordWarnAge: 7 FailDelay: 4", "parameter_defaults: HorizonAllowedHosts: <value>", "parameter_defaults: ControllerExtraConfig: horizon::disallow_iframe_embed: false", "horizon::enable_secure_proxy_ssl_header: true", "ssh tripleo-admin@controller-0", "sudo egrep ^SECURE_PROXY_SSL_HEADER /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')", "horizon::cache_backend: django.core.cache.backends.memcached.MemcachedCache horizon::django_session_engine: 'django.contrib.sessions.backends.cache'", "parameter_defaults: HorizonPasswordValidator: '^.{8,18}USD' HorizonPasswordValidatorHelp: 'Password must be between 8 and 18 characters.'", "openstack overcloud deploy --templates -e <full environment> -e horizon_password.yaml", "parameter_defaults: ControllerExtraConfig: horizon::enforce_password_check: false", "parameter_defaults: ControllerExtraConfig: horizon::disable_password_reveal: false", "<snip> <div class=\"container\"> <div class=\"row-fluid\"> <div class=\"span12\"> <div id=\"brand\"> <img src=\"../../static/themes/rcue/images/RHOSP-Login-Logo.svg\"> </div><!--/#brand--> </div><!--/.span*--> <!-- Start of Logon Banner --> <p>Authentication to this information system reflects acceptance of user monitoring agreement.</p> <!-- End of Logon Banner --> {% include 'auth/_login.html' %} </div><!--/.row-fluid-> </div><!--/.container--> {% block js %} {% include \"horizon/_scripts.html\" %} {% endblock %} </body> </html>", "<Directory /> LimitRequestBody 10737418240 </Directory>", "<Directory \"/var/www\"> LimitRequestBody 10737418240 </Directory>", "<Directory \"/var/www\"> LimitRequestBody 10737418240 </Directory>", "manila endpoints +-------------+-----------------------------------------+ | manila | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v1/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v1/20787a7b...| | internalURL | http://172.18.198.55:8786/v1/20787a7b...| | id | 82cc5535aa444632b64585f138cb9b61 | +-------------+-----------------------------------------+ +-------------+-----------------------------------------+ | manilav2 | Value | 
+-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v2/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v2/20787a7b...| | internalURL | http://172.18.198.55:8786/v2/20787a7b...| | id | 2e8591bfcac4405fa7e5dc3fd61a2b85 | +-------------+-----------------------------------------+", "api-paste.ini manila.conf policy.json rootwrap.conf rootwrap.d ./rootwrap.d: share.filters", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+----------------------------------+----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+----------------------------------+----------------------+ | 5..| default| public | YES | driver_handles_share_servers:True| snapshot_support:True| +----+--------+-----------+-----------+----------------------------------+----------------------+", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD openstack project list +----------------------------------+--------------------+ | ID | Name | +----------------------------------+--------------------+ | ... | ... | | df29a37db5ae48d19b349fe947fada46 | demo | +----------------------------------+--------------------+ USD manila type-access-add my_type df29a37db5ae48d19b349fe947fada46", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "{ \"context_is_admin\": \"role:admin\", \"admin_or_owner\": \"is_admin:True or project_id:%(project_id)s\", \"default\": \"rule:admin_or_owner\", \"share_extension:quotas:show\": \"\", \"share_extension:quotas:update\": \"rule:admin_api\", \"share_extension:quotas:delete\": \"rule:admin_api\", \"share_extension:quota_classes\": \"\", }", "chown -R root:swift /var/lib/config-data/puppet-generated/swift/etc/swift/* find /var/lib/config-data/puppet-generated/swift/etc/swift/ -type f -exec chmod 640 {} \\; find /var/lib/config-data/puppet-generated/swift/etc/swift/ -type d -exec chmod 750 {} \\;", "The sanitization process removes information from the media such that the information cannot be retrieved or reconstructed. 
Sanitization techniques, including clearing, purging, cryptographic erase, and destruction, prevent the disclosure of information to unauthorized individuals when such media is reused or released for disposal.", "parameter_defaults: VerifyGlanceSignatures: True", "[ {rabbit, [ {tcp_listeners, [] }, {ssl_listeners, [{\"<IP address or hostname of management network interface>\", 5671}] }, {ssl_options, [{cacertfile,\"/etc/ssl/cacert.pem\"}, {certfile,\"/etc/ssl/rabbit-server-cert.pem\"}, {keyfile,\"/etc/ssl/rabbit-server-key.pem\"}, {verify,verify_peer}, {fail_if_no_peer_cert,true}]} ]} ].", "[DEFAULT] rpc_backend = nova.openstack.common.rpc.impl_kombu rabbit_use_ssl = True rabbit_host = RABBIT_HOST rabbit_port = 5671 rabbit_user = compute01 rabbit_password = RABBIT_PASS kombu_ssl_keyfile = /etc/ssl/node-key.pem kombu_ssl_certfile = /etc/ssl/node-cert.pem kombu_ssl_ca_certs = /etc/ssl/cacert.pem", "[DEFAULT] rpc_backend = nova.openstack.common.rpc.impl_qpid qpid_protocol = ssl qpid_hostname = <IP or hostname of management network interface of messaging server> qpid_port = 5671 qpid_username = compute01 qpid_password = QPID_PASS", "qpid_sasl_mechanisms = <space separated list of SASL mechanisms to use for auth>", "cat open-up-glance-api-metadef.yaml", "GlanceApiPolicies: { glance-metadef_default: { key: 'metadef_default', value: '' }, glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' }, glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' }, glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' }, glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' }, glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' }, glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' }, glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' }, glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' }, glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' }, glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' }, glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' }, glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' }, glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' }, glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' }, glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' }, glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tag: { key: 
'add_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' }, glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' }, glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' } }", "openstack overcloud deploy -e open-up-glance-api-metadef.yaml", "touch ~/templates/tls-ciphers.yaml", "parameter_defaults: ExtraConfig: # TLSv1.3 configuration tripleo::haproxy::ssl_options:: 'ssl-min-ver TLSv1.3'", "openstack overcloud deploy --templates -e /home/stack/templates/tls-ciphers.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/hardening_red_hat_openstack_platform/index
Chapter 2. Understanding Operators
Chapter 2. Understanding Operators 2.1. What are Operators? Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers. Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor's engineering team, monitoring a Kubernetes environment (such as OpenShift Container Platform) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time. More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes. 2.1.1. Why use Operators? Operators provide: Repeatability of installation and upgrade. Constant health checks of every system component. Over-the-air (OTA) updates for OpenShift components and ISV content. A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two. Why deploy on Kubernetes? Kubernetes (and by extension, OpenShift Container Platform) contains all of the primitives needed to build complex distributed systems - secret handling, load balancing, service discovery, autoscaling - that work across on-premises and cloud providers. Why manage your app with Kubernetes APIs and kubectl tooling? These APIs are feature rich, have clients for all platforms and plug into the cluster's access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB , looks and acts just like the built-in, native Kubernetes objects. How do Operators compare with service brokers? A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well. 2.1.2. Operator Framework The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems: Operator SDK The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities. Operator Lifecycle Manager Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in OpenShift Container Platform 4.14. 
Operator Registry The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM. OperatorHub OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform. These tools are designed to be composable, so you can use any that are useful to you. 2.1.3. Operator maturity model The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator. One can however generalize the scale of the maturity of the encapsulated operations of an Operator for certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator: Figure 2.1. Operator maturity model The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK. 2.2. Operator Framework packaging format This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.2.1. Bundle format The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata. An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image , which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay. Operator metadata can include: Information that identifies the Operator, for example its name and version. Additional information that drives the UI, for example its icon and some example custom resources (CRs). Required and provided APIs. Related images. When loading manifests into the Operator Registry database, the following requirements are validated: The bundle must have at least one channel defined in the annotations. Every bundle has exactly one cluster service version (CSV). If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle. 2.2.1.1. Manifests Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator. A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory. 
Example bundle format layout etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml Additionally supported objects The following object types can also be optionally included in the /manifests directory of a bundle: Supported optional object types ClusterRole ClusterRoleBinding ConfigMap ConsoleCLIDownload ConsoleLink ConsoleQuickStart ConsoleYamlSample PodDisruptionBudget PriorityClass PrometheusRule Role RoleBinding Secret Service ServiceAccount ServiceMonitor VerticalPodAutoscaler When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV: Lifecycle for optional objects When the CSV is deleted, OLM deletes the optional object. When the CSV is upgraded: If the name of the optional object is the same, OLM updates it in place. If the name of the optional object has changed between versions, OLM deletes and recreates it. 2.2.1.2. Annotations A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles: Example annotations.yaml annotations: operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1 operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2 operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3 operators.operatorframework.io.bundle.package.v1: "test-operator" 4 operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5 operators.operatorframework.io.bundle.channel.default.v1: "stable" 6 1 The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects. 2 The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/ . The value manifests.v1 implies that the bundle contains Operator manifests. 3 The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/ . The value metadata.v1 implies that this bundle has Operator metadata. 4 The package name of the bundle. 5 The list of channels the bundle is subscribing to when added into an Operator Registry. 6 The default channel an Operator should be subscribed to when installed from a registry. Note In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file. 2.2.1.3. Dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . 
olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 Additional resources Operator Lifecycle Manager dependency resolution 2.2.1.4. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. See CLI tools for steps on installing the opm CLI. 2.2.2. File-based catalogs File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade edges Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. 
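The composition described above can be carried out with ordinary file system tools. The following is a minimal sketch, assuming catalogA and catalogB are valid file-based catalog directories in the current working directory and that the opm CLI is installed:
mkdir catalogC
cp -r catalogA catalogB catalogC/
# opm validate returns an error if the combined catalog contains duplicate packages or duplicate bundles
opm validate ./catalogC
If the validation exits without error, catalogC can be packaged and published like any other file-based catalog.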
Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.2.2.1. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog could also be included in a parent catalog by copying it into the parent catalog's root directory. 2.2.2.2. 
Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 2.2.2.2.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon. Example 2.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 2.2.2.2.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles. A bundle can included as an entry in multiple olm.channel blobs, but it can have only one entry per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 2.2. olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are therefore no longer installable by users with the spec.startingCSV property of Subscription objects. If you want to have direct (one version increment) updates to an Operator version from multiple versions, and also keep those versions available to users for installation, always use the skipRange field along with the replaces field. 
Ensure that the replaces field points to the immediate version of the Operator version in question. 2.2.2.2.3. olm.bundle schema Example 2.3. olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 2.2.2.3. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. OLM defines a handful of property types, again using the reserved olm.* prefix. 2.2.2.3.1. olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 2.4. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 2.2.2.3.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 2.5. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.3.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. Example 2.6. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 2.2.2.3.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 2.7. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.4. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. 
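As a minimal sketch of that rendering step, assuming a hypothetical Operator catalog image and a catalog root directory named my-catalog, a single Operator catalog can be rendered into its own subdirectory:
mkdir -p my-catalog/example-operator
# opm render writes the file-based catalog representation of the image to stdout
opm render quay.io/example-org/example-operator-index:latest > my-catalog/example-operator/index.yaml
The image reference and directory names here are placeholders; a fuller, scripted version of this workflow follows.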
There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm alpha generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 2.2.2.5. Guidelines Consider the following guidelines when maintaining file-based catalogs. 2.2.2.5.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 2.2.2.5.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 2.2.2.6. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 2.2.2.7. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. 
Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 2.2.3. RukPak (Technology Preview) Important RukPak is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.12 introduces the platform Operator type as a Technology Preview feature. The platform Operator mechanism relies on the RukPak component, also introduced in OpenShift Container Platform 4.12, and its resources to manage content. OpenShift Container Platform 4.14 introduces Operator Lifecycle Manager (OLM) 1.0 as a Technology Preview feature, which also relies on the RukPak component. RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy. RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions. At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs. Common terminology Bundle A collection of Kubernetes manifests that define content to be deployed to a cluster Bundle image A container image that contains a bundle within its filesystem Bundle Git repository A Git repository that contains a bundle within a directory Provisioner Controllers that install and manage content on a Kubernetes cluster Bundle deployment Generates deployed instances of a bundle Additional resources Managing platform Operators Technology Preview restrictions for platform Operators About Operator Lifecycle Manager 1.0 (Technology Preview) 2.2.3.1. Bundle A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content. Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type. 
Example Bundle object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain Note Bundles are considered immutable after they are created. 2.2.3.1.1. Bundle immutability After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object. Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status , are updated during the bundle's lifecycle; it is only the spec field that is considered immutable. Applying a Bundle object and then attempting to update its spec should fail. For example, the following example creates a bundle: USD oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF Example output bundle.core.rukpak.io/combo-tag-ref created Then, patching the bundle to point to a newer tag returns an error: USD oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}' Example output Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place. Further immutability considerations While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario: A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod. If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content. This is similar to pod behavior, where one of the pod's container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it. 
To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle. 2.2.3.1.2. Plain bundle spec A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory. The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0 , combines the type of bundle ( plain ) with the current schema version ( v0 ). Note The plain+v0 bundle format is at schema version v0 , which means it is an experimental format that is subject to change. For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application. Example plain+v0 bundle file tree USD tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories. Important Do not include any content in the manifests/ directory of a plain bundle that are not static manifests. Otherwise, a failure will occur when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command will result in an error. Multi-object YAML or JSON files are valid, as well. 2.2.3.1.3. Registry bundle spec A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format. Additional resources Legacy OLM bundle format 2.2.3.2. BundleDeployment Warning A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions. The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle. Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept. The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle is defined by the provisioner that is configured to watch that bundle deployment. Example BundleDeployment object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain 2.2.3.3. About provisioners RukPak consists of a series of controllers, known as provisioners , that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment . These components work together to bring content onto the cluster and install it, generating resources within the cluster. 
Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles. Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster. A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources. 2.3. Operator Framework glossary of common terms This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK. 2.3.1. Common Operator Framework terms 2.3.1.1. Bundle In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster. 2.3.1.2. Bundle image In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub. 2.3.1.3. Catalog source A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies. 2.3.1.4. Channel A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest. An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel. 2.3.1.5. Channel head A channel head refers to the latest known update in a particular channel. 2.3.1.6. Cluster service version A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on. 2.3.1.7. Dependency An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer. OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles. 2.3.1.8. 
Index image In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool. 2.3.1.9. Install plan An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV. 2.3.1.10. Multitenancy A tenant in OpenShift Container Platform is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams. When a cluster is shared by multiple users or groups, it is considered a multitenant cluster. 2.3.1.11. Operator group An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide. 2.3.1.12. Package In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs. 2.3.1.13. Registry A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels. 2.3.1.14. Subscription A subscription keeps CSVs up to date by tracking a channel in a package. 2.3.1.15. Update graph An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added. 2.4. Operator Lifecycle Manager (OLM) 2.4.1. Operator Lifecycle Manager concepts and resources This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.1.1. What is Operator Lifecycle Manager? Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 2.2. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.14, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. 2.4.1.2. OLM resources The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM): Table 2.1. CRDs managed by OLM and Catalog Operators Resource Short name Description ClusterServiceVersion (CSV) csv Application metadata. For example: name, version, icon, required resources. CatalogSource catsrc A repository of CSVs, CRDs, and packages that define an application. 
Subscription sub Keeps CSVs up to date by tracking a channel in a package. InstallPlan ip Calculated list of resources to be created to automatically install or upgrade a CSV. OperatorGroup og Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. OperatorConditions - Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM. 2.4.1.2.1. Cluster service version A cluster service version (CSV) represents a specific version of a running Operator on an OpenShift Container Platform cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster. OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm , deb , or apk bundle. A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo. A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment. 2.4.1.2.2. Catalog source A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OpenShift Container Platform web console also displays the Operators provided by catalog sources. Tip Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration Cluster Settings Configuration OperatorHub page in the web console. The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API. Example 2.8. 
Example CatalogSource object apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace 1 Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace. 2 Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace . The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace. 3 Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog's index image version as part of cluster upgrades. Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details. 4 Display name for the catalog in the web console and CLI. 5 Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time. 6 Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs. 7 Source types include the following: grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API. grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases. configmap : OLM parses config map data and runs a pod that can serve the gRPC API over it. 8 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 9 Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image , if defined. 10 Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image , if defined. 
Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ( "" ) assigns the pod the default priority. Other priority classes can be defined manually. 11 Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image , if defined. 12 Automatically check for new versions at a given interval to stay up-to-date. 13 Last observed state of the catalog connection. For example: READY : A connection is successfully established. CONNECTING : A connection is attempting to establish. TRANSIENT_FAILURE : A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again. See States of Connectivity in the gRPC documentation for more details. 14 Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date. 15 Status information for the catalog's Operator Registry service. Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator: Example 2.9. Example Subscription object referencing a catalog source apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace Additional resources Understanding OperatorHub Red Hat-provided Operator catalogs Adding a catalog source to a cluster Catalog priority Viewing Operator catalog source status by using the CLI Understanding and managing pod security admission Catalog source pod scheduling 2.4.1.2.2.1. Image template for custom catalog sources Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OpenShift Container Platform 4.14. During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.13 to 4.14, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.13 to: registry.redhat.io/redhat/redhat-operator-index:v4.14 However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image. Starting in OpenShift Container Platform 4.9, cluster administrators can add the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template: kube_major_version kube_minor_version kube_patch_version Note You must specify the Kubernetes cluster version and not an OpenShift Container Platform cluster version, as the latter is not currently available for templating. 
Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path. Important You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade. Example 2.10. Example catalog source with an image template apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.27 priority: -400 publisher: Example Org Note If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value. If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition. For an OpenShift Container Platform 4.14 cluster, which uses Kubernetes 1.27, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference: quay.io/example-org/example-catalog:v1.27 For future releases of OpenShift Container Platform, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OpenShift Container Platform version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OpenShift Container Platform version would then automatically update the catalog's index image as well. 2.4.1.2.2.2. Catalog health requirements Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster. For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A. As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator. As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog. 
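For illustration only, the following sketch shows how such a failure might surface in the status conditions of a subscription; the subscription and catalog names are hypothetical, and the reason and message strings shown here are illustrative and can vary between OLM releases:

Example subscription status with an unhealthy catalog (sketch)

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace
status:
  conditions:
  - type: CatalogSourcesUnhealthy        # condition described in this section
    status: "True"
    reason: UnhealthyCatalogSourceFound  # illustrative reason string
    message: 'targeted catalogsource openshift-marketplace/example-catalog unhealthy'

After the unhealthy catalog is repaired or removed, the condition is expected to clear and blocked installation or update operations can resolve again.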
Additional resources Removing custom catalogs Disabling the default OperatorHub catalog sources 2.4.1.2.3. Subscription A subscription , defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha , beta , or stable , helps determine which Operator stream should be installed from the catalog source. The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). In addition to being easily visible from the OpenShift Container Platform web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster. Additional resources Multitenancy and Operator colocation Viewing Operator subscription status by using the CLI 2.4.1.2.4. Install plan An install plan , defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV). To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator. The install plan must then be approved according to one of the following approval strategies: If the subscription's spec.installPlanApproval field is set to Automatic , the install plan is approved automatically. If the subscription's spec.installPlanApproval field is set to Manual , the install plan must be manually approved by a cluster administrator or user with proper permissions. After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription. Example 2.11. Example InstallPlan object apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: ... 
catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- ... name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- ... name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- ... name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created ... Additional resources Multitenancy and Operator colocation Allowing non-cluster administrators to install Operators 2.4.1.2.5. Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. Additional resources Operator groups 2.4.1.2.6. Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. Additional resources Operator conditions 2.4.2. Operator Lifecycle Manager architecture This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.2.1. Component responsibilities Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 2.2. 
CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 2.3. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions 2.4.2.2. OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. 2.4.2.3. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. 2.4.2.4. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. 2.4.3. 
Operator Lifecycle Manager workflow This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.3.1. Operator installation and upgrade workflow in OLM In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades: ClusterServiceVersion (CSV) CatalogSource Subscription Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API , to query for available Operators as well as upgrades for installed Operators. Figure 2.3. Catalog source overview Within a catalog source, Operators are organized into packages and streams of updates called channels , which should be a familiar update pattern from OpenShift Container Platform or other software on a continuous release cycle like web browsers. Figure 2.4. Packages and channels in a Catalog source A user indicates a particular package and channel in a particular catalog source in a subscription , for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed. Note OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog channel package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository. Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates: Figure 2.5. OLM graph of available channel updates Example channels in a package packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV. 2.4.3.1.1. Example upgrade path For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1 . OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2 , which in turn replaces the older and installed CSV version 0.1.1 . OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1 ; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head. For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1 . Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2 . At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed. 2.4.3.1.2. Skipping upgrades The basic path for upgrades in OLM is: A catalog source is updated with one or more updates to an Operator. OLM traverses every version of the Operator until reaching the latest version the catalog source contains. However, sometimes this is not a safe operation to perform. 
There will be cases where a published version of an Operator should never be installed on a cluster if it has not already, for example because a version introduces a serious vulnerability. In those cases, OLM must consider two cluster states and provide an update graph that supports both: The "bad" intermediate Operator has been seen by the cluster and installed. The "bad" intermediate Operator has not yet been installed onto the cluster. By shipping a new catalog and adding a skipped release, OLM is ensured that it can always get a single unique update regardless of the cluster state and whether it has seen the bad update yet. Example CSV with skipped release apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1 Consider the following example of Old CatalogSource and New CatalogSource . Figure 2.6. Skipping updates This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . If the bad update has not yet been installed, it will never be. 2.4.3.1.3. Replacing multiple Operators Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation: olm.skipRange: <semver_range> where <semver_range> has the version range format supported by the semver library . When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel. The order of precedence is: Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met. The Operator that replaces the current one, in the source specified by sourceName . Channel head in another source that is visible to the subscription, if the other criteria for skipping are met. The Operator that replaces the current one in any source visible to the subscription. Example CSV with skipRange apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2' 2.4.3.1.4. Z-stream support A z-stream , or patch release, must replace all z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions, it just needs to build the correct graph in a catalog. In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource : Figure 2.7. Replacing several Operators This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource . Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this. 2.4.4. 
Operator Lifecycle Manager dependency resolution This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.4.1. About dependency resolution Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm . However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other. As a result, OLM must never create the following scenarios: Install a set of Operators that require APIs that cannot be provided Update an Operator in a way that breaks another that depends upon it This is made possible with two types of data: Properties Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator. Constraints or dependencies An Operator's requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed. OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed. 2.4.4.2. Operator properties All Operators in a catalog have the following properties: olm.package Includes the name of the package and the version of the Operator olm.gvk A single property for each provided API from the cluster service version (CSV) Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle. Example arbitrary property properties: - type: olm.kubeversion value: version: "1.16.0" 2.4.4.2.1. Arbitrary properties Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime. These properties are opaque to the resolver as it does not understand the properties, but it can evaluate the generic constraints against those properties to determine if the constraints can be satisfied given the properties list. Example arbitrary properties properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints. Additional resources Common Expression Language (CEL) constraints 2.4.4.3. Operator dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. 
This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 2.4.4.4. Generic constraints An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field holding a string-representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime. The following keys denote the available constraint types: gvk Type whose value and interpretation is identical to the olm.gvk type package Type whose value and interpretation is identical to the olm.package type cel A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information all , any , not Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk or a nested compound constraint 2.4.4.4.1. Common Expression Language (CEL) constraints The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint. Example cel constraint type: olm.constraint value: failureMessage: 'require to have "certified"' cel: rule: 'properties.exists(p, p.type == "certified")' The CEL syntax supports a wide range of logical operators, such as AND and OR . As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is solved into a single bundle or Operator that satisfies all of those rules within a single constraint. Example cel constraint with multiple rules type: olm.constraint value: failureMessage: 'require to have "certified" and "stable" properties' cel: rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")' 2.4.4.4.2. Compound constraints (all, any, not) Compound constraint types are evaluated following their logical definitions. 
The following is an example of a conjunctive constraint ( all ) of two packages and one GVK. That is, they must all be satisfied by installed bundles: Example all constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because... all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for... gvk: group: greens.example.com version: v1 kind: Green The following is an example of a disjunctive constraint ( any ) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles: Example any constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because... any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue The following is an example of a negation constraint ( not ) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set: Example not constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because... not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens The negation semantics might appear unclear in the not constraint context. To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set. As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense. 2.4.4.4.3. Nested compound constraints A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type. The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint: Example nested compound constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because... any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue Note The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks. 2.4.4.5. Dependency preferences There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear. 2.4.4.5.1. Catalog priority On an OpenShift Container Platform cluster, OLM reads catalog sources to know which Operators are available for installation. 
Example CatalogSource object apiVersion: "operators.coreos.com/v1alpha1" kind: "CatalogSource" metadata: name: "my-operators" namespace: "operators" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: "My Operators" priority: 100 1 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency. There are two rules that govern catalog preference: Options in higher-priority catalogs are preferred to options in lower-priority catalogs. Options in the same catalog as the dependent are preferred to any other catalogs. 2.4.4.5.2. Channel ordering An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Container Platform cluster. Channels can be used to provide a particular stream of updates for a minor release ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels. Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name. 2.4.4.5.3. Order within a channel There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs. When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency. Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first. 2.4.4.5.4. Other constraints In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants. 2.4.4.5.4.1. Subscription constraint A subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated. 2.4.4.5.4.2. Package constraint Within a namespace, no two Operators may come from the same package. 2.4.4.5.5. Additional resources Catalog health requirements 2.4.4.6. CRD upgrades OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions: All existing serving versions in the current CRD are present in the new CRD. All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD. 
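As a minimal sketch, assuming a hypothetical myresources.example.com CRD owned by multiple CSVs, the following fragment shows a new CRD revision that satisfies the first condition by continuing to serve the existing v1alpha1 version while adding v1beta1; the validation schemas are elided:

Example CRD revision that retains an existing serving version (sketch)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com
spec:
  group: example.com
  names:
    kind: MyResource
    listKind: MyResourceList
    plural: myresources
    singular: myresource
  scope: Namespaced
  versions:
  - name: v1alpha1     # existing serving version is retained
    served: true
    storage: false
    schema:
      openAPIV3Schema: ...
  - name: v1beta1      # new version introduced by the upgrade
    served: true
    storage: true
    schema:
      openAPIV3Schema: ...

Existing custom resources stored at v1alpha1 must also validate against the schema of the new CRD for the second condition to hold.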
Additional resources Adding a new CRD version Deprecating or removing a CRD version 2.4.4.7. Dependency best practices When specifying dependencies, there are best practices you should consider. Depend on APIs or a specific version range of Operators Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead. Set a minimum version The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible. For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended. For example: TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource. TestOperator v1.0.1 adds a new field spec.newfield to MyObject , but still at v1alpha1. Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0. Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum. Omit a maximum version or allow a very wide range Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency. Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0 . Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound. Note Cluster administrators cannot override dependencies set by an Operator author. However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1 . Additional resources Kubernetes documentation: Changing the API 2.4.4.8. Dependency caveats When specifying dependencies, there are caveats you should consider. No compound constraints (AND) There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0 . This means that when specifying a dependency such as: dependencies: - type: olm.package value: packageName: etcd version: ">3.1.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0 . Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the order in which potential options are visited. 
Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other. Cross-namespace compatibility OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa. 2.4.4.9. Example dependency resolution scenarios In the following examples, a provider is an Operator which "owns" a CRD or API service. Example: Deprecating dependent APIs A and B are APIs (CRDs): The provider of A depends on B. The provider of B has a subscription. The provider of B updates to provide C but deprecates B. This results in: B no longer has a provider. A no longer works. This is a case OLM prevents with its upgrade strategy. Example: Version deadlock A and B are APIs: The provider of A requires B. The provider of B requires A. The provider of A updates to (provide A2, require B2) and deprecate A. The provider of B updates to (provide B2, require A2) and deprecate B. If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found. This is another case OLM prevents with its upgrade strategy. 2.4.5. Operator groups This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.5.1. About Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. 2.4.5.2. Operator group membership An Operator is considered a member of an Operator group if the following conditions are true: The CSV of the Operator exists in the same namespace as the Operator group. The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group. An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes : Table 2.4. Install modes and supported Operator groups InstallModeType Description OwnNamespace The Operator can be a member of an Operator group that selects its own namespace. SingleNamespace The Operator can be a member of an Operator group that selects one namespace. MultiNamespace The Operator can be a member of an Operator group that selects more than one namespace. AllNamespaces The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string "" ). Note If the spec of a CSV omits an entry of InstallModeType , then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it. 2.4.5.3. 
Target namespace selection You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. You can alternatively specify a namespace using a label selector with the spec.selector parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: "true" Important Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an Operator group will likely be removed in a future release. If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespaces of a global Operator group contains the empty string ( "" ), which signals to a consuming Operator that it should watch all namespaces. 2.4.5.4. Operator group CSV annotations Member CSVs of an Operator group have the following annotations: Annotation Description olm.operatorGroup=<group_name> Contains the name of the Operator group. olm.operatorNamespace=<group_namespace> Contains the namespace of the Operator group. olm.targetNamespaces=<target_namespaces> Contains a comma-delimited string that lists the target namespace selection of the Operator group. Note All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants. 2.4.5.5. Provided APIs annotation A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about which GVKs are provided by an Operator group is shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included. Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local ... spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local 2.4.5.6. Role-based access control When an Operator group is created, three cluster roles are generated. 
Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below: Cluster role Label to match <operatorgroup_name>-admin olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <operatorgroup_name>-edit olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <operatorgroup_name>-view olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict : Cluster roles for each API resource from a CRD Cluster roles for each API resource from an API service Additional roles and role bindings Table 2.5. Cluster roles generated for each API resource from a CRD Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> <kind>.<group>-<version>-view-crdview Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name> : get Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Table 2.6. Cluster roles generated for each API resource from an API service Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Additional roles and role bindings If the CSV defines exactly one target namespace that contains * , then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels. If the CSV does not define exactly one target namespace that contains * , then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace. 2.4.5.7. Copied CSVs OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. 
The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there. Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants. Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV. Note By default, the disableCopiedCSVs field is disabled. When the disableCopiedCSVs field is enabled, OLM deletes the existing copied CSVs on the cluster. When the field is disabled again, OLM recreates the copied CSVs. Disable the disableCopiedCSVs field: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF Enable the disableCopiedCSVs field: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF 2.4.5.8. Static Operator groups An Operator group is static if its spec.staticProvidedAPIs field is set to true . As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources. Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" label: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: "true" Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. 2.4.5.9. Operator group intersection Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set. A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces. Note When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces. Rules for intersection Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set: If true and the CSV's provided APIs are a subset of the Operator group's: Continue transitioning. 
If true and the CSV's provided APIs are not a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the union of itself and the CSV's provided APIs. If false and the CSV's provided APIs are not a subset of the Operator group's: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict . If false and the CSV's provided APIs are a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the difference between itself and the CSV's provided APIs. Note Failure states caused by Operator groups are non-terminal. The following actions are performed each time an Operator group synchronizes: The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored. The cluster set is compared to olm.providedAPIs , and if olm.providedAPIs contains any extra APIs, then those APIs are pruned. All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV. 2.4.5.10. Limitations for multitenant Operator management OpenShift Container Platform provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator's API versions must be the same. Operators are control plane extensions due to their usage of CustomResourceDefinition objects (CRDs), which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs. This makes them incompatible to install simultaneously in different namespaces on a cluster. All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster. The supported scenarios include the following: Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions) Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster. Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation Operators in multitenant clusters Allowing non-cluster administrators to install Operators 2.4.5.11. Troubleshooting Operator groups Membership An install plan's namespace must contain only one Operator group. 
When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios: No Operator groups exist in the install plan's namespace. Multiple Operator groups exist in the install plan's namespace. An incorrect or non-existent service account name is specified in the Operator group. If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace: attenuated service account query failed - more than one operator group(s) are managing this namespace count=2 where count= specifies the number of Operator groups in the namespace. If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup . CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection. 2.4.6. Multitenancy and Operator colocation This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM). 2.4.6.1. Colocation of Operators in a namespace Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. This default behavior manifests in two ways: InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace. All Operators in the same namespace share the same update policy. For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual. These scenarios can lead to the following issues: It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator. It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators. These issues usually surface because, when installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. As a cluster administrator, you can bypass this default behavior manually by using the following workflow: Create a namespace for the installation of the Operator. Create a custom global Operator group, which is an Operator group that watches all namespaces (see the example after this workflow). By associating this Operator group with the namespace you just created, it makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces. Install the desired Operator in the installation namespace. If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.
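For illustration, the custom global Operator group in this workflow is an OperatorGroup object that omits both spec.targetNamespaces and spec.selector, which causes it to select all namespaces. A minimal sketch, assuming a hypothetical installation namespace named global-operators:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: global-operators
  namespace: global-operators

Operators subscribed in this namespace then watch all namespaces, as they would in the default openshift-operators namespace, but their install plans and update policy are no longer shared with Operators installed in openshift-operators.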
For a detailed procedure, see "Installing global Operators in custom namespaces". Additional resources Installing global Operators in custom namespaces Operators in multitenant clusters 2.4.7. Operator conditions This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions. 2.4.7.1. About Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. 2.4.7.2. Supported conditions Operator Lifecycle Manager (OLM) supports the following Operator conditions. 2.4.7.2.1. Upgradeable condition The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when: An Operator is about to start a critical process and should not be upgraded until the process is completed. An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded. Important Setting the Upgradeable Operator condition to the False value does not avoid pod disruption. If you must ensure your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section. Example Upgradeable Operator condition apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: "False" 2 reason: "migration" message: "The Operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Name of the condition. 2 A False value indicates the Operator is not ready to be upgraded. OLM prevents a CSV that replaces the existing CSV of the Operator from leaving the Pending phase. A False value does not block cluster upgrades. 2.4.7.3. Additional resources Managing Operator conditions Enabling Operator conditions Using pod disruption budgets to specify the number of pods that must be up Graceful termination 2.4.8. Operator Lifecycle Manager metrics 2.4.8.1. Exposed metrics Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack. Table 2.7. Metrics exposed by OLM Name Description catalog_source_count Number of catalog sources. catalogsource_ready State of a catalog source. The value 1 indicates that the catalog source is in a READY state. The value of 0 indicates that the catalog source is not in a READY state. 
csv_abnormal When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded , for example when it is not installed. Includes the name , namespace , phase , reason , and version labels. A Prometheus alert is created when this metric is present. csv_count Number of CSVs successfully registered. csv_succeeded When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1 ) or not (value 0 ). Includes the name , namespace , and version labels. csv_upgrade_count Monotonic count of CSV upgrades. install_plan_count Number of install plans. installplan_warnings_total Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan. olm_resolution_duration_seconds The duration of a dependency resolution attempt. subscription_count Number of subscriptions. subscription_sync_total Monotonic count of subscription syncs. Includes the channel , installed CSV, and subscription name labels. 2.4.9. Webhook management in Operator Lifecycle Manager Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator. See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM. 2.4.9.1. Additional resources Types of webhook admission plugins Kubernetes documentation: Validating admission webhooks Mutating admission webhooks Conversion webhooks 2.5. Understanding OperatorHub 2.5.1. About OperatorHub OperatorHub is the web console interface in OpenShift Container Platform that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM). Cluster administrators can choose from catalogs grouped into the following categories: Category Description Red Hat Operators Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. Certified Operators Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. Red Hat Marketplace Certified software that can be purchased from Red Hat Marketplace . Community Operators Optionally-visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. Custom Operators Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub. Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions. The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. 
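As an illustration, the Operators that these catalogs currently make available to a cluster can also be listed from the CLI through the packagemanifests API; the Operator name in the second command is a placeholder:

oc get packagemanifests -n openshift-marketplace
oc describe packagemanifest <operator_name> -n openshift-marketplace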
If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com . 2.5.2. OperatorHub architecture The OperatorHub UI component is driven by the Marketplace Operator by default on OpenShift Container Platform in the openshift-marketplace namespace. 2.5.2.1. OperatorHub custom resource The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. You can modify this resource to enable or disable the default catalogs, which is useful when configuring OpenShift Container Platform in restricted network environments. Example OperatorHub custom resource apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: "community-operators", disabled: false } ] 1 disableAllDefaultSources is an override that controls availability of all default catalogs that are configured by default during an OpenShift Container Platform installation. 2 Disable default catalogs individually by changing the disabled parameter value per source. 2.5.3. Additional resources Catalog source About the Operator SDK Defining cluster service versions (CSVs) Operator installation and upgrade workflow in OLM Red Hat Partner Connect Red Hat Marketplace 2.6. Red Hat-provided Operator catalogs Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs , Operator Framework packaging format , and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.6.1. About Operator catalogs An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster. As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content. 
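As an aside, because an index image is a container image, its catalog content can be inspected locally with the opm CLI before it is referenced by a catalog source. A minimal sketch, assuming the v4.14 Red Hat Operator index image shown later in this document; the output file name is arbitrary:

opm render registry.redhat.io/redhat/redhat-operator-index:v4.14 > redhat-operator-index.json

The rendered output contains the olm.package, olm.channel, and olm.bundle blobs described by the file-based catalog schemas.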
As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Note Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in OpenShift Container Platform 4.8 and later. When creating custom catalog images, earlier versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders must use the opm index command to manage index images. Additional resources Managing custom catalogs Packaging format Using Operator Lifecycle Manager on restricted networks 2.6.2. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4.14 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. certified-operators registry.redhat.io/redhat/certified-operator-index:v4.14 Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4.14 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4.14 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources is updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 2.7. Operators in multitenant clusters The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters.
In order for multiple tenants on an OpenShift Container Platform cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege. Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements. Additional resources Common terms: Multitenant Limitations for multitenant Operator management 2.7.1. Default Operator install modes and behavior When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator's capabilities: Single namespace Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace. All namespaces Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator's suggested namespace. This choice also means that users in the affected namespaces get access to the Operator's APIs, which can leverage the custom resources (CRs) they own, depending on their role in the namespace: The namespace-admin and namespace-edit roles can read/write to the Operator APIs, meaning they can use them. The namespace-view role can read CR objects of that Operator. For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator's privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces. Additional resources Adding Operators to a cluster Install modes types Setting a suggested namespace 2.7.2. Recommended solution for multitenant clusters While a Multinamespace install mode does exist, it is supported by very few Operators. As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow: Create a namespace for the tenant Operator that is separate from the tenant's namespace. Create an Operator group for the tenant Operator scoped only to the tenant's namespace. Install the Operator in the tenant Operator namespace. As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account are visible or usable by the tenant. This solution provides better tenant separation and adherence to the principle of least privilege, at the cost of additional resource usage and orchestration to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters". Limitations and considerations This solution only works when the following constraints are met: All instances of the same Operator must be the same version. The Operator cannot have dependencies on other Operators. The Operator cannot ship a CRD conversion webhook. Important You cannot use different versions of the same Operator on the same cluster.
Eventually, the installation of another instance of the Operator would be blocked when it meets the following conditions: The instance is not the newest version of the Operator. The instance ships an older revision of the CRDs that lack information or versions that newer revisions have that are already in use on the cluster. Warning As an administrator, use caution when allowing non-cluster administrators to install Operators self-sufficiently, as explained in "Allowing non-cluster administrators to install Operators". These tenants should only have access to a curated catalog of Operators that are known to not have dependencies. These tenants must also be forced to use the same version line of an Operator, to ensure the CRDs do not change. This requires the use of namespace-scoped catalogs and likely disabling the global default catalogs. Additional resources Preparing for multiple instances of an Operator for multitenant clusters Allowing non-cluster administrators to install Operators Disabling the default OperatorHub catalog sources 2.7.3. Operator colocation and Operator groups Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation . 2.8. CRDs 2.8.1. Extending the Kubernetes API with custom resource definitions Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so that custom objects managed by the Operator look and act just like the built-in, native Kubernetes objects. This guide describes how cluster administrators can extend their OpenShift Container Platform cluster by creating and managing CRDs. 2.8.1.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. When a cluster administrator adds a new CRD to the cluster, the Kubernetes API server reacts by creating a new RESTful resource path that can be accessed by the entire cluster or a single project (namespace) and begins serving the specified CR. Cluster administrators that want to grant access to the CRD to other users can use cluster role aggregation to grant access to users with the admin , edit , or view default cluster roles. Cluster role aggregation allows the insertion of custom policy rules into these cluster roles. This behavior integrates the new resource into the RBAC policy of the cluster as if it was a built-in resource. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. 
Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.1.2. Creating a custom resource definition To create custom resource (CR) objects, cluster administrators must first create a custom resource definition (CRD). Prerequisites Access to an OpenShift Container Platform cluster with cluster-admin user privileges. Procedure To create a CRD: Create a YAML file that contains the following field types: Example YAML file for a CRD apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9 1 Use the apiextensions.k8s.io/v1 API. 2 Specify a name for the definition. This must be in the <plural-name>.<group> format using the values from the group and plural fields. 3 Specify a group name for the API. An API group is a collection of objects that are logically related. For example, all batch objects like Job or ScheduledJob could be in the batch API group (such as batch.api.example.com ). A good practice is to use a fully-qualified-domain name (FQDN) of your organization. 4 Specify a version name to be used in the URL. Each API group can exist in multiple versions, for example v1alpha , v1beta , v1 . 5 Specify whether the custom objects are available to a project ( Namespaced ) or all projects in the cluster ( Cluster ). 6 Specify the plural name to use in the URL. The plural field is the same as a resource in an API URL. 7 Specify a singular name to use as an alias on the CLI and for display. 8 Specify the kind of objects that can be created. The type can be in CamelCase. 9 Specify a shorter string to match your resource on the CLI. Note By default, a CRD is cluster-scoped and available to all projects. Create the CRD object: USD oc create -f <file_name>.yaml A new RESTful API endpoint is created at: /apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/... For example, using the example file, the following endpoint is created: /apis/stable.example.com/v1/namespaces/*/crontabs/... You can now use this endpoint URL to create and manage CRs. The object kind is based on the spec.kind field of the CRD object you created. 2.8.1.3. Creating cluster roles for custom resource definitions Cluster administrators can grant permissions to existing cluster-scoped custom resource definitions (CRDs). If you use the admin , edit , and view default cluster roles, you can take advantage of cluster role aggregation for their rules. Important You must explicitly assign permissions to each of these roles. The roles with more permissions do not inherit rules from roles with fewer permissions. If you assign a rule to a role, you must also assign that verb to roles that have more permissions. For example, if you grant the get crontabs permission to the view role, you must also grant it to the edit and admin roles. The admin or edit role is usually assigned to the user that created a project through the project template. Prerequisites Create a CRD. Procedure Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An OpenShift Container Platform controller adds the rules that you specify to the default cluster roles. 
Example YAML file for a cluster role definition kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" 3 rbac.authorization.k8s.io/aggregate-to-edit: "true" 4 rules: - apiGroups: ["stable.example.com"] 5 resources: ["crontabs"] 6 verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the "view" default role. rbac.authorization.k8s.io/aggregate-to-view: "true" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" 10 rules: - apiGroups: ["stable.example.com"] 11 resources: ["crontabs"] 12 verbs: ["get", "list", "watch"] 13 1 Use the rbac.authorization.k8s.io/v1 API. 2 8 Specify a name for the definition. 3 Specify this label to grant permissions to the admin default role. 4 Specify this label to grant permissions to the edit default role. 5 11 Specify the group name of the CRD. 6 12 Specify the plural name of the CRD that these rules apply to. 7 13 Specify the verbs that represent the permissions that are granted to the role. For example, apply read and write permissions to the admin and edit roles and only read permission to the view role. 9 Specify this label to grant permissions to the view default role. 10 Specify this label to grant permissions to the cluster-reader default role. Create the cluster role: USD oc create -f <file_name>.yaml 2.8.1.4. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.1.5. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. 
For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. 2.8.2. Managing resources from custom resource definitions This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs). 2.8.2.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.2.2. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.2.3. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. 
For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays.
[ "etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml", "annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml", "catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json", "_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }", "#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }", "#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. 
skipRange?: string & !=\"\" }", "#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }", "#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }", "#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }", "#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317", "name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm alpha generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . 
docker push \"USDindexImage\"", "apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF", "bundle.core.rukpak.io/combo-tag-ref created", "oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'", "Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable", "tree manifests manifests ├── namespace.yaml ├── service_account.yaml ├── cluster_role.yaml ├── cluster_role_binding.yaml └── deployment.yaml", "apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "registry.redhat.io/redhat/redhat-operator-index:v4.13", "registry.redhat.io/redhat/redhat-operator-index:v4.14", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.27 priority: -400 publisher: Example Org", "quay.io/example-org/example-catalog:v1.27", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - 
lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created", "packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1", "olm.skipRange: <semver_range>", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'", "properties: - type: olm.kubeversion value: version: \"1.16.0\"", "properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'", "type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - 
failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue", "apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100", "dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"", "attenuated service account query failed - more than one operator group(s) are managing this namespace count=2", "apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]", "registry.redhat.io/redhat/redhat-operator-index:v4.8", "registry.redhat.io/redhat/redhat-operator-index:v4.9", "apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9", "oc create -f <file_name>.yaml", "/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/", 
"/apis/stable.example.com/v1/namespaces/*/crontabs/", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13", "oc create -f <file_name>.yaml", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operators/understanding-operators
Chapter 9. Quotas
Chapter 9. Quotas 9.1. Resource quotas per project A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project. This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them. 9.1.1. Resources managed by quotas The following describes the set of compute resources and object types that can be managed by a quota. Note A pod is in a terminal state if status.phase in (Failed, Succeeded) is true. Table 9.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. Table 9.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. Table 9.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of ReplicationControllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. services.loadbalancers The total number of services of type LoadBalancer that can exist in the project. 
services.nodeports The total number of services of type NodePort that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of imagestreams that can exist in the project. 9.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu 9.1.3. Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system. 9.1.4. Requests versus limits When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 9.1.5. Sample resource quota definitions core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 services.loadbalancers: "2" 6 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. 6 The total number of services of type LoadBalancer that can exist in the project. 
openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 limits.cpu: "2" 4 limits.memory: 2Gi 5 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 5 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 scopes: - NotTerminating 4 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods fall under NotTerminating unless the RestartNever policy is applied. compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 scopes: - Terminating 4 1 The total number of pods in a terminating state. 2 Across all pods in a terminating state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a terminating state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota charges for build or deployer pods, but not long running pods like a web server or database. storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 
5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 8 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 9 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. 9.1.6. Creating a quota You can create a quota to constrain resource usage in a given project. Procedure Define the quota in a file. Use the file to create the quota and apply it to a project: USD oc create -f <file> [-n <project_name>] For example: USD oc create -f core-object-counts.yaml -n demoproject 9.1.6.1. Creating object count quotas You can create an object count quota for all standard namespaced resource types on OpenShift Container Platform, such as BuildConfig and DeploymentConfig objects. An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. The quota can only be created if there are enough spare resources within the project. Procedure To configure an object count quota for a resource: Run the following command: USD oc create quota <name> \ --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1 1 The <resource> variable is the name of the resource, and <group> is the API group, if applicable. Use the oc api-resources command for a list of resources and their associated API groups. For example: USD oc create quota test \ --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 Example output resourcequota "test" created This example limits the listed resources to the hard limit in each project in the cluster. Verify that the quota was created: USD oc describe quota test Example output Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 9.1.6.2. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. is allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure Determine how many GPUs are available on a node in your cluster. For example: # oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0 In this example, 2 GPUs are available. Create a ResourceQuota object to set a quota in the namespace nvidia . 
In this example, the quota is 1 : Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota: # oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml : apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Create the pod: # oc create -f gpu-pod.yaml Verify that the pod is running: # oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: # oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 9.1.7. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details. Procedure Get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject Example output NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10 Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Example output Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 9.1.8. Configuring explicit resource quotas Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Add a resource quota definition to a project request template: If a project request template does not exist in a cluster: Create a bootstrap project template and output it to a file called template.yaml : USD oc adm create-bootstrap-project-template -o yaml > template.yaml Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. 
The definition must be added before the parameters: section in the template: - apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project. 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot create claims. Create a project request template from the modified template.yaml file in the openshift-config namespace: USD oc create -f template.yaml -n openshift-config Note To include the configuration as a kubectl.kubernetes.io/last-applied-configuration annotation, add the --save-config option to the oc create command. By default, the template is called project-request . If a project request template already exists within a cluster: Note If you declaratively or imperatively manage objects within your cluster by using configuration files, edit the existing project request template through those files instead. List templates in the openshift-config namespace: USD oc get templates -n openshift-config Edit an existing project request template: USD oc edit template <project_request_template> -n openshift-config Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template. If you created a project request template, reference it in the cluster's project configuration resource: Access the project configuration resource for editing: By using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . By using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name project-request : apiVersion: config.openshift.io/v1 kind: Project metadata: # ... 
spec: projectRequestTemplate: name: project-request Verify that the resource quota is applied when projects are created: Create a project: USD oc new-project <project_name> List the project's resource quotas: USD oc get resourcequotas Describe the resource quota in detail: USD oc describe resourcequotas <resource_quota_name> 9.2. Resource quotas across multiple projects A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated and that aggregate is used to limit resources across all the selected projects. This guide describes how cluster administrators can set and manage resource quotas across multiple projects. 9.2.1. Selecting multiple projects during quota creation When creating quotas, you can select multiple projects based on annotation selection, label selection, or both. Procedure To select projects based on annotations, run the following command: USD oc create clusterquota for-user \ --project-annotation-selector openshift.io/requester=<user_name> \ --hard pods=10 \ --hard secrets=20 This creates the following ClusterResourceQuota object: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: "10" secrets: "20" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" total: 5 hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" 1 The ResourceQuotaSpec object that will be enforced over the selected projects. 2 A simple key-value selector for annotations. 3 A label selector that can be used to select projects. 4 A per-namespace map that describes current quota usage in each selected project. 5 The aggregate usage across all selected projects. This multi-project quota document controls all projects requested by <user_name> using the default project request endpoint. You are limited to 10 pods and 20 secrets. Similarly, to select projects based on labels, run this command: USD oc create clusterresourcequota for-name \ 1 --project-label-selector=name=frontend \ 2 --hard=pods=10 --hard=secrets=20 1 Both clusterresourcequota and clusterquota are aliases of the same command. for-name is the name of the ClusterResourceQuota object. 2 To select projects by label, provide a key-value pair by using the format --project-label-selector=key=value . This creates the following ClusterResourceQuota object definition: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: "10" secrets: "20" selector: annotations: null labels: matchLabels: name: frontend 9.2.2. Viewing applicable cluster resource quotas A project administrator is not allowed to create or modify the multi-project quota that limits his or her project, but the administrator is allowed to view the multi-project quota documents that are applied to his or her project. The project administrator can do this via the AppliedClusterResourceQuota resource. Procedure To view quotas applied to a project, run: USD oc describe AppliedClusterResourceQuota Example output Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20 9.2.3. 
Selection granularity Because of the locking that occurs when claiming quota allocations, the number of active projects selected by a multi-project quota is an important consideration. Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects.
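Section 9.2.1 notes that projects can be selected by annotation, by label, or by both. The following is a minimal sketch of a combined selector, reusing the example values from that section; it assumes the two selector types can be set together in a single ClusterResourceQuota object, so verify the behavior in your cluster before relying on it.
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: for-user-frontend
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations:
      openshift.io/requester: <user_name>
    labels:
      matchLabels:
        name: frontend
With this definition, only projects that carry both the requester annotation and the name=frontend label are expected to count against the shared limits.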
[ "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9", "oc create -f <file> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4", "resourcequota \"test\" created", "oc describe quota test", "Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc create -f gpu-pod.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: 
requests.nvidia.com/gpu=1", "oc get quota -n demoproject", "NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10", "oc describe quota core-object-counts -n demoproject", "Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f template.yaml -n openshift-config", "oc get templates -n openshift-config", "oc edit template <project_request_template> -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request", "oc new-project <project_name>", "oc get resourcequotas", "oc describe resourcequotas <resource_quota_name>", "oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"", "oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend", "oc describe AppliedClusterResourceQuota", "Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/quotas
Chapter 17. Importing and Exporting Realms
Chapter 17. Importing and Exporting Realms In this chapter, you are going to understand the different approaches for importing and exporting realms using JSON files. Note Exporting and importing into single files can produce large files, so if your database contains more than 500 users, export to a directory and not a single file. Using a directory performs better as the directory provider uses a separate transaction for each "page" (a file of users). The default count of users per file and per transaction is fifty. Increasing this to a larger number leads to an exponentially increasing execution time. 17.1. Providing options for database connection parameters When using the export and the import commands below, Red Hat build of Keycloak needs to know how to connect to the database where the information about realms, clients, users and other entities is stored. As described in Configuring Red Hat build of Keycloak , that information can be provided as command line parameters, environment variables, or a configuration file. Use the --help command line option for each command to see the available options. Some of the configuration options are build time configuration options. By default, Red Hat build of Keycloak will re-build automatically for the export and import commands if it detects a change of a build time parameter. If you have built an optimized version of Red Hat build of Keycloak with the build command as outlined in Configuring Red Hat build of Keycloak , use the command line option --optimized to have Keycloak skip the build check for a faster startup time. When doing this, remove the build time options from the command line and keep only the runtime options. 17.2. Exporting a Realm to a Directory To export a realm, you can use the export command. Your Red Hat build of Keycloak server instance must not be started when invoking this command. bin/kc.[sh|bat] export --help To export a realm to a directory, you can use the --dir <dir> option. bin/kc.[sh|bat] export --dir <dir> When exporting realms to a directory, the server is going to create separate files for each realm being exported. 17.2.1. Configuring how users are exported You are also able to configure how users are going to be exported by setting the --users <strategy> option. The values available for this option are: different_files : Users are exported into different json files, depending on the maximum number of users per file set by --users-per-file . This is the default value. skip : Skips exporting users. realm_file : Users will be exported to the same file as the realm settings. For a realm named "foo", this would be "foo-realm.json" with realm data and users. same_file : All users are exported to one explicit file. So you will get two json files for a realm, one with realm data and one with users. If you are exporting users using the different_files strategy, you can set how many users per file you want by setting the --users-per-file option. The default value is 50 . bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100 17.3. Exporting a Realm to a File To export a realm to a file, you can use the --file <file> option. bin/kc.[sh|bat] export --file <file> When exporting realms to a file, the server is going to use the same file to store the configuration for all the realms being exported. 17.4. Exporting a specific realm If you do not specify a specific realm to export, all realms are exported.
To export a single realm, you can use the --realm option as follows: bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm 17.5. Importing a Realm from a Directory To import a realm, you can use the import command. Your Red Hat build of Keycloak server instance must not be started when invoking this command. bin/kc.[sh|bat] import --help After exporting a realm to a directory, you can use the --dir <dir> option to import the realm back to the server as follows: bin/kc.[sh|bat] import --dir <dir> When importing realms using the import command, you can specify whether existing realms should be skipped, or whether they should be overridden with the new configuration. For that, you can set the --override option as follows: bin/kc.[sh|bat] import --dir <dir> --override false By default, the --override option is set to true so that realms are always overridden with the new configuration. 17.6. Importing a Realm from a File To import a realm previously exported in a single file, you can use the --file <file> option as follows: bin/kc.[sh|bat] import --file <file> 17.7. Importing a Realm during Startup You are also able to import realms when the server is starting by using the --import-realm option. bin/kc.[sh|bat] start --import-realm When you set the --import-realm option, the server is going to try to import any realm configuration file from the data/import directory. Only regular files using the .json extension are read from this directory; sub-directories are ignored. Note For the Red Hat build of Keycloak containers, the import directory is /opt/keycloak/data/import If a realm already exists in the server, the import operation is skipped. The main reason behind this behavior is to avoid re-creating realms and potentially losing state between server restarts. To re-create realms, you should explicitly run the import command prior to starting the server. Importing the master realm is not supported because it is a very sensitive operation. 17.7.1. Using Environment Variables within the Realm Configuration Files When importing a realm at startup, you are able to use placeholders to resolve values from environment variables for any realm configuration. Realm configuration using placeholders { "realm": "USD{MY_REALM_NAME}", "enabled": true, ... } In the example above, the value set to the MY_REALM_NAME environment variable is going to be used to set the realm property.
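To tie the startup import and the placeholder resolution together, the following is a minimal Linux shell sketch; it assumes a realm file containing the placeholder shown above has already been copied to data/import/my-realm.json, and the realm name value is only an example.
# Assumes data/import/my-realm.json uses the placeholder for the realm name
export MY_REALM_NAME=my-realm
bin/kc.sh start --import-realm
When the server starts, the placeholder resolves to my-realm and the realm is imported, unless a realm with that name already exists, in which case the import is skipped.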
[ "bin/kc.[sh|bat] export --help", "bin/kc.[sh|bat] export --dir <dir>", "bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100", "bin/kc.[sh|bat] export --file <file>", "bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm", "bin/kc.[sh|bat] import --help", "bin/kc.[sh|bat] import --dir <dir>", "bin/kc.[sh|bat] import --dir <dir> --override false", "bin/kc.[sh|bat] import --file <file>", "bin/kc.[sh|bat] start --import-realm", "{ \"realm\": \"USD{MY_REALM_NAME}\", \"enabled\": true, }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/importexport-
Chapter 15. Setting up client access to a Kafka cluster
Chapter 15. Setting up client access to a Kafka cluster After you have deployed Streams for Apache Kafka , you can set up client access to your Kafka cluster. To verify the deployment, you can deploy example producer and consumer clients. Otherwise, create listeners that provide client access within or outside the OpenShift cluster. 15.1. Deploying example clients Send and receive messages from a Kafka cluster installed on OpenShift. This procedure describes how to deploy Kafka clients to the OpenShift cluster, then produce and consume messages to test your installation. The clients are deployed using the Kafka container image. Prerequisites The Kafka cluster is available for the clients. Procedure Deploy a Kafka producer. This example deploys a Kafka producer that connects to the Kafka cluster my-cluster . A topic named my-topic is created. Deploying a Kafka producer to OpenShift oc run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic Type a message into the console where the producer is running. Press Enter to send the message. Deploy a Kafka consumer. The consumer should consume messages produced to my-topic in the Kafka cluster my-cluster . Deploying a Kafka consumer to OpenShift oc run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. 15.2. Configuring listeners to connect to Kafka Use listeners to enable client connections to Kafka. Streams for Apache Kafka provides a generic GenericKafkaListener schema with properties to configure listeners through the Kafka resource. When configuring a Kafka cluster, you specify a listener type based on your requirements, environment, and infrastructure. Services, routes, load balancers, and ingresses for clients to connect to a cluster are created according to the listener type. Internal and external listener types are supported. Internal listeners Use internal listener types to connect clients within a kubernetes cluster. internal to connect within the same OpenShift cluster cluster-ip to expose Kafka using per-broker ClusterIP services Internal listeners use a headless service and the DNS names assigned to the broker pods. By default, they do not use the OpenShift service DNS domain (typically .cluster.local ). However, you can customize this configuration using the useServiceDnsDomain property. Consider using a cluster-ip type listener if routing through the headless service isn't feasible or if you require a custom access mechanism, such as when integrating with specific Ingress controllers or the OpenShift Gateway API. External listeners Use external listener types to connect clients outside an OpenShift cluster. nodeport to use ports on OpenShift nodes loadbalancer to use loadbalancer services ingress to use Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes (Kubernetes only) route to use OpenShift Route and the default HAProxy router (OpenShift only) External listeners handle access to a Kafka cluster from networks that require different authentication mechanisms. For example, loadbalancers might not be suitable for certain infrastructure, such as bare metal, where node ports provide a better option. 
Important Do not use the built-in ingress controller on OpenShift, use the route type instead. The Ingress NGINX Controller is only intended for use on Kubernetes. The route type is only supported on OpenShift. Each listener is defined as an array in the Kafka resource. Example listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key # ... You can configure as many listeners as required, as long as their names and ports are unique. You can also configure listeners for secure connection using authentication. Note If you scale your Kafka cluster while using external listeners, it might trigger a rolling update of all Kafka brokers. This depends on the configuration. Additional resources GenericKafkaListener schema reference 15.3. Listener naming conventions From the listener configuration, the resulting listener bootstrap and per-broker service names are structured according to the following naming conventions: Table 15.1. Listener naming conventions Listener type Bootstrap service name Per-Broker service name internal <cluster_name>-kafka-bootstrap Not applicable loadbalancer nodeport ingress route cluster-ip <cluster_name>-kafka-<listener-name>-bootstrap <cluster_name>-kafka-<listener-name>-<idx> For example, my-cluster-kafka-bootstrap , my-cluster-kafka-external1-bootstrap , and my-cluster-kafka-external1-0 . The names are assigned to the services, routes, load balancers, and ingresses created through the listener configuration. You can use certain backwards compatible names and port numbers to transition listeners initially configured under the retired KafkaListeners schema. The resulting external listener naming convention varies slightly. The specific combinations of listener name and port configuration values in the following table are backwards compatible. Table 15.2. Backwards compatible listener name and port combinations Listener name Port Bootstrap service name Per-Broker service name plain 9092 <cluster_name>-kafka-bootstrap Not applicable tls 9093 <cluster-name>-kafka-bootstrap Not applicable external 9094 <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap-<idx> 15.4. Accessing Kafka using node ports Use node ports to access a Kafka cluster from an external client outside the OpenShift cluster. To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption. The procedure shows basic nodeport listener configuration. You can use listener properties to enable TLS encryption ( tls ) and specify a client authentication mechanism ( authentication ). Add additional configuration using configuration properties. For example, you can use the following configuration properties with nodeport listeners: preferredNodePortAddressType Specifies the first address type that's checked as the node address. externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. nodePort Overrides the assigned node port numbers for the bootstrap and broker services. For more information on listener configuration, see the GenericKafkaListener schema reference . 
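As an illustration of those nodeport properties, the following sketch nests them under the listener's configuration block; the address type, traffic policy, and port numbers are placeholder assumptions, and the exact nesting of the bootstrap and broker node port overrides should be confirmed against the GenericKafkaListener schema reference.
listeners:
  - name: external4
    port: 9094
    type: nodeport
    tls: true
    configuration:
      preferredNodePortAddressType: InternalDNS
      externalTrafficPolicy: Local
      bootstrap:
        nodePort: 32100
      brokers:
        - broker: 0
          nodePort: 32000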
Prerequisites A running Cluster Operator In this procedure, the Kafka cluster name is my-cluster . The name of the listener is external4 . Procedure Configure a Kafka resource with an external listener set to the nodeport type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # ... # ... zookeeper: # ... Create or update the resource. oc apply -f <kafka_configuration_file> A cluster CA certificate to verify the identity of the kafka brokers is created in the secret my-cluster-cluster-ca-cert . NodePort type services are created for each Kafka broker, as well as an external bootstrap service. Node port services created for the bootstrap and brokers NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP The bootstrap address used for client connection is propagated to the status of the Kafka resource. Example status for the bootstrap address status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.9.0 listeners: # ... - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.9 # ... Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external4")].bootstrapServers}{"\n"}' ip-10-0-224-199.us-west-2.compute.internal:32650 Extract the cluster CA certificate. oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Configure your client to connect to the brokers. Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, ip-10-0-224-199.us-west-2.compute.internal:32650 . Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection. If you enabled a client authentication mechanism, you will also need to configure it in your client. Note If you are using your own listener certificates, check whether you need to add the CA certificate to the client's truststore configuration. If it is a public (external) CA, you usually won't need to add it. 15.5. Accessing Kafka using loadbalancers Use loadbalancers to access a Kafka cluster from an external client outside the OpenShift cluster. To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption. The procedure shows basic loadbalancer listener configuration. You can use listener properties to enable TLS encryption ( tls ) and specify a client authentication mechanism ( authentication ). Add additional configuration using configuration properties. For example, you can use the following configuration properties with loadbalancer listeners: loadBalancerSourceRanges Restricts traffic to a specified list of CIDR (Classless Inter-Domain Routing) ranges. 
externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. loadBalancerIP Requests a specific IP address when creating a loadbalancer. For more information on listener configuration, see the GenericKafkaListener schema reference . Prerequisites A running Cluster Operator In this procedure, the Kafka cluster name is my-cluster . The name of the listener is external3 . Procedure Configure a Kafka resource with an external listener set to the loadbalancer type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # ... # ... zookeeper: # ... Create or update the resource. oc apply -f <kafka_configuration_file> A cluster CA certificate to verify the identity of the kafka brokers is also created in the secret my-cluster-cluster-ca-cert . loadbalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service. Loadbalancer services and loadbalancers created for the bootstraps and brokers NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com The bootstrap address used for client connection is propagated to the status of the Kafka resource. Example status for the bootstrap address status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.9.0 listeners: # ... - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.9 # ... The DNS addresses used for client connection are propagated to the status of each loadbalancer service. Example status for the bootstrap loadbalancer status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com # ... Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external3")].bootstrapServers}{"\n"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 Extract the cluster CA certificate. oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Configure your client to connect to the brokers. Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. 
For example, a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 . Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection. If you enabled a client authentication mechanism, you will also need to configure it in your client. Note If you are using your own listener certificates, check whether you need to add the CA certificate to the client's truststore configuration. If it is a public (external) CA, you usually won't need to add it. 15.6. Accessing Kafka using OpenShift routes Use OpenShift routes to access a Kafka cluster from clients outside the OpenShift cluster. To be able to use routes, add configuration for a route type listener in the Kafka custom resource. When applied, the configuration creates a dedicated route and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap route, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific routes and services. To connect to a broker, you specify a hostname for the route bootstrap address, as well as the certificate used for TLS encryption. For access using routes, the port is always 443. Warning An OpenShift route address comprises the Kafka cluster name, the listener name, the project name, and the domain of the router. For example, my-cluster-kafka-external1-bootstrap-my-project.domain.com (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>.<domain>). Each DNS label (between periods ".") must not exceed 63 characters, and the total length of the address must not exceed 255 characters. The procedure shows basic listener configuration. TLS encryption ( tls ) must be enabled. You can also specify a client authentication mechanism ( authentication ). Add additional configuration using configuration properties. For example, you can use the host configuration property with route listeners to specify the hostnames used by the bootstrap and per-broker services. For more information on listener configuration, see the GenericKafkaListener schema reference . TLS passthrough TLS passthrough is enabled for routes created by Streams for Apache Kafka. Kafka uses a binary protocol over TCP, but routes are designed to work with a HTTP protocol. To be able to route TCP traffic through routes, Streams for Apache Kafka uses TLS passthrough with Server Name Indication (SNI). SNI helps with identifying and passing connection to Kafka brokers. In passthrough mode, TLS encryption is always used. Because the connection passes to the brokers, the listeners use TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property . Prerequisites A running Cluster Operator In this procedure, the Kafka cluster name is my-cluster . The name of the listener is external1 . Procedure Configure a Kafka resource with an external listener set to the route type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # ... # ... zookeeper: # ... 1 For route type listeners, TLS encryption must be enabled ( true ). Create or update the resource. 
oc apply -f <kafka_configuration_file> A cluster CA certificate to verify the identity of the kafka brokers is created in the secret my-cluster-cluster-ca-cert . ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. A route is also created for each service, with a DNS address (host/port) to expose them using the default OpenShift HAProxy router. The routes are preconfigured with TLS passthrough. Routes created for the bootstraps and brokers NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough The DNS addresses used for client connection are propagated to the status of each route. Example status for the bootstrap route status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com # ... Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client . openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts The server name is the Server Name Indication (SNI) for passing the connection to the broker. If the connection is successful, the certificates for the broker are returned. Certificates for the broker Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0 Retrieve the address of the bootstrap service from the status of the Kafka resource. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external1")].bootstrapServers}{"\n"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443 The address comprises the Kafka cluster name, the listener name, the project name and the domain of the router ( router.com in this example). Extract the cluster CA certificate. oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Configure your client to connect to the brokers. Specify the address for the bootstrap service and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster. Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection. If you enabled a client authentication mechanism, you will also need to configure it in your client. Note If you are using your own listener certificates, check whether you need to add the CA certificate to the client's truststore configuration. If it is a public (external) CA, you usually won't need to add it. 15.7. Discovering connection details for clients Service discovery makes it easier for client applications running in the same OpenShift cluster as Streams for Apache Kafka to interact with a Kafka cluster. A service discovery label and annotation are created for the following services: Internal Kafka bootstrap service Kafka Bridge service Service discovery label The service discovery label, strimzi.io/discovery , is set to true for Service resources to make them discoverable for client connections. 
Service discovery annotation The service discovery annotation provides connection details in JSON format for each service for client applications to use to establish connections. Example internal Kafka bootstrap service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 9092, "tls" : false, "protocol" : "kafka", "auth" : "scram-sha-512" }, { "port" : 9093, "tls" : true, "protocol" : "kafka", "auth" : "tls" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: "true" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #... Example Kafka Bridge service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 8080, "tls" : false, "auth" : "none", "protocol" : "http" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: "true" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service Find services by specifying the discovery label when fetching services from the command line or a corresponding API call. Returning services using the discovery label oc get service -l strimzi.io/discovery=true Connection details are returned when retrieving the service discovery label.
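To read the connection details themselves rather than just listing the discoverable services, one option is a JSONPath query against the bootstrap service; this sketch assumes the internal bootstrap service name my-cluster-kafka-bootstrap from the examples above.
oc get service my-cluster-kafka-bootstrap -o jsonpath='{.metadata.annotations.strimzi\.io/discovery}'
The command prints the JSON array from the annotation, which a client application can parse to choose a port and the matching authentication mechanism.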
[ "run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic", "run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP", "status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.9.0 listeners: # - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.9 #", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external4\")].bootstrapServers}{\"\\n\"}' ip-10-0-224-199.us-west-2.compute.internal:32650", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com", "status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.9.0 listeners: # - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- 
a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.9 #", "status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com #", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external3\")].bootstrapServers}{\"\\n\"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # # zookeeper: #", "apply -f <kafka_configuration_file>", "NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough", "status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com #", "openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts", "Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external1\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443", "get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service", "get service -l strimzi.io/discovery=true" ]
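The external listener procedures collected above finish by retrieving an advertised bootstrap address and the cluster CA certificate; a minimal sketch of using them from a client host follows. The truststore name, password, and topic are illustrative assumptions, and a listener that enforces TLS client authentication would additionally need a keystore containing a KafkaUser certificate:
# Import the extracted cluster CA certificate into a truststore (file name and password are placeholders)
keytool -keystore client.truststore.jks -storepass changeit -noprompt -alias cluster-ca -import -file ca.crt
# Produce messages through the bootstrap address reported in the Kafka resource status
bin/kafka-console-producer.sh --bootstrap-server <bootstrap_address>:<port> \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.location=client.truststore.jks \
  --producer-property ssl.truststore.password=changeit \
  --topic my-topic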
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/deploy-client-access-str
21.2. XML Representation of the Roles Collection
21.2. XML Representation of the Roles Collection Example 21.1. An XML representation of the roles collection
[ "<roles> <role id=\"00000000-0000-0000-0000-000000000001\" href=\"/ovirt-engine/api/roles/00000000-0000-0000-0000-000000000001\"> <name>SuperUser</name> <description>Roles management administrator</description> <link rel=\"permits\" href=\"/ovirt-engine/api/roles/00000000-0000-0000-0000-000000000001/permits\"/> <mutable>false</mutable> <administrative>true</administrative> </role> <role id=\"00000000-0000-0000-0001-000000000001\" href=\"/ovirt-engine/api/roles/00000000-0000-0000-0001-000000000001\"> <name>RHEVMUser</name> <description>RHEVM user</description> <link rel=\"permits\" href=\"/ovirt-engine/api/roles/00000000-0000-0000-0001-000000000001/permits\"/> <mutable>false</mutable> <administrative>false</administrative> </role> <role id=\"00000000-0000-0000-0001-000000000002\" href=\"/ovirt-engine/api/roles/00000000-0000-0000-0001-000000000002\"> <name>RHEVMPowerUser</name> <description>RHEVM power user</description> <link rel=\"permits\" href=\"/ovirt-engine/api/roles/00000000-0000-0000-0001-000000000002/permits\"/> <mutable>false</mutable> <administrative>false</administrative> </role> </roles>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_the_roles_collection
Chapter 10. File-based configuration
Chapter 10. File-based configuration AMQ JavaScript can read the configuration options used to establish connections from a local file named connect.json . This enables you to configure connections in your application at the time of deployment. The library attempts to read the file when the application calls the container connect method without supplying any connection options. 10.1. File locations If set, AMQ JavaScript uses the value of the MESSAGING_CONNECT_FILE environment variable to locate the configuration file. If MESSAGING_CONNECT_FILE is not set, AMQ JavaScript searches for a file named connect.json at the following locations and in the order shown. It stops at the first match it encounters. On Linux: USDPWD/connect.json , where USDPWD is the current working directory of the client process USDHOME/.config/messaging/connect.json , where USDHOME is the current user home directory /etc/messaging/connect.json On Windows: %cd%/connect.json , where %cd% is the current working directory of the client process If no connect.json file is found, the library uses default values for all options. 10.2. The file format The connect.json file contains JSON data, with additional support for JavaScript comments. All of the configuration attributes are optional or have default values, so a simple example need only provide a few details: Example: A simple connect.json file { "host": "example.com", "user": "alice", "password": "secret" } SASL and SSL/TLS options are nested under "sasl" and "tls" namespaces: Example: A connect.json file with SASL and SSL/TLS options { "host": "example.com", "user": "ortega", "password": "secret", "sasl": { "mechanisms": ["SCRAM-SHA-1", "SCRAM-SHA-256"] }, "tls": { "cert": "/home/ortega/cert.pem", "key": "/home/ortega/key.pem" } } 10.3. Configuration options The option keys containing a dot (.) represent attributes nested inside a namespace. Table 10.1. Configuration options in connect.json Key Value type Default value Description scheme string "amqps" "amqp" for cleartext or "amqps" for SSL/TLS host string "localhost" The hostname or IP address of the remote host port string or number "amqps" A port number or port literal user string None The user name for authentication password string None The password for authentication sasl.mechanisms list or string None (system defaults) A JSON list of enabled SASL mechanisms. A bare string represents one mechanism. If none are specified, the client uses the default mechanisms provided by the system. sasl.allow_insecure boolean false Enable mechanisms that send cleartext passwords tls.cert string None The filename or database ID of the client certificate tls.key string None The filename or database ID of the private key for the client certificate tls.ca string None The filename, directory, or database ID of the CA certificate tls.verify boolean true Require a valid server certificate with a matching hostname
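A short sketch of exercising this behavior from a shell; MESSAGING_CONNECT_FILE and the search paths come from the section above, while the client script name is a placeholder for your own application:
# Install a per-user configuration file in the documented search location
mkdir -p $HOME/.config/messaging
cp connect.json $HOME/.config/messaging/connect.json
# Or point the library at an explicit file for one run
MESSAGING_CONNECT_FILE=/etc/messaging/connect.json node send.js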
[ "{ \"host\": \"example.com\", \"user\": \"alice\", \"password\": \"secret\" }", "{ \"host\": \"example.com\", \"user\": \"ortega\", \"password\": \"secret\", \"sasl\": { \"mechanisms\": [\"SCRAM-SHA-1\", \"SCRAM-SHA-256\"] }, \"tls\": { \"cert\": \"/home/ortega/cert.pem\", \"key\": \"/home/ortega/key.pem\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/file_based_configuration
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install an OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. When installing a cluster using SR-IOV, you must deploy clusters using cgroup v1. For more information, see Enabling Linux control group version 1 (cgroup v1). 2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. 
Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack .
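Alongside the networks created above, the compute machines need a flavor that meets the requirements listed earlier in this chapter, including huge pages support. A hedged sketch, with the flavor name and page size chosen for illustration:
# Create a flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage (name is illustrative)
openstack flavor create --ram 16384 --vcpus 4 --disk 25 ocp-sriov-worker
# Request huge pages for instances that use the flavor
openstack flavor set --property hw:mem_page_size=large ocp-sriov-worker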
[ "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_openstack/installing-openstack-nfv-preparing
Authorization APIs
Authorization APIs OpenShift Container Platform 4.18 Reference guide for authorization APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/index
Chapter 9. CSISnapshotController [operator.openshift.io/v1]
Chapter 9. CSISnapshotController [operator.openshift.io/v1] Description CSISnapshotController provides a means to configure an operator to manage the CSI snapshots. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 9.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 9.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 9.1.3. 
.status.conditions Description conditions is a list of conditions and their status Type array 9.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 9.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 9.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 9.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/csisnapshotcontrollers DELETE : delete collection of CSISnapshotController GET : list objects of kind CSISnapshotController POST : create a CSISnapshotController /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name} DELETE : delete a CSISnapshotController GET : read the specified CSISnapshotController PATCH : partially update the specified CSISnapshotController PUT : replace the specified CSISnapshotController /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name}/status GET : read status of the specified CSISnapshotController PATCH : partially update status of the specified CSISnapshotController PUT : replace status of the specified CSISnapshotController 9.2.1. /apis/operator.openshift.io/v1/csisnapshotcontrollers Table 9.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSISnapshotController Table 9.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CSISnapshotController Table 9.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.5. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotControllerList schema 401 - Unauthorized Empty HTTP method POST Description create a CSISnapshotController Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.8. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 202 - Accepted CSISnapshotController schema 401 - Unauthorized Empty 9.2.2. /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name} Table 9.9. Global path parameters Parameter Type Description name string name of the CSISnapshotController Table 9.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSISnapshotController Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.12. Body parameters Parameter Type Description body DeleteOptions schema Table 9.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSISnapshotController Table 9.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.15. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSISnapshotController Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.17. Body parameters Parameter Type Description body Patch schema Table 9.18. 
HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSISnapshotController Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 401 - Unauthorized Empty 9.2.3. /apis/operator.openshift.io/v1/csisnapshotcontrollers/{name}/status Table 9.22. Global path parameters Parameter Type Description name string name of the CSISnapshotController Table 9.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CSISnapshotController Table 9.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.25. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CSISnapshotController Table 9.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.27. Body parameters Parameter Type Description body Patch schema Table 9.28. HTTP responses HTTP code Reponse body 200 - OK CSISnapshotController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CSISnapshotController Table 9.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.30. Body parameters Parameter Type Description body CSISnapshotController schema Table 9.31. 
HTTP responses HTTP code Response body 200 - OK CSISnapshotController schema 201 - Created CSISnapshotController schema 401 - Unauthorized Empty
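A hedged sketch of exercising these endpoints through the oc client rather than raw HTTP calls; cluster is the canonical resource name noted above, and Debug is one of the documented logLevel values:
# Read the cluster-scoped CSISnapshotController resource
oc get csisnapshotcontroller cluster -o yaml
# Partially update the spec, equivalent to a PATCH against /apis/operator.openshift.io/v1/csisnapshotcontrollers/cluster
oc patch csisnapshotcontroller cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'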
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/csisnapshotcontroller-operator-openshift-io-v1
Security and compliance
Security and compliance OpenShift Container Platform 4.13 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_and_compliance/index
Chapter 5. Installer-provisioned postinstallation configuration
Chapter 5. Installer-provisioned postinstallation configuration After successfully deploying an installer-provisioned cluster, consider the following postinstallation procedures. 5.1. Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.13.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. 
server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes. USD oc apply -f 99-master-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes. USD oc apply -f 99-worker-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created Check the status of the applied NTP settings. USD oc describe machineconfigpool 5.2. Enabling a provisioning network after installation The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the baremetal network. You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO). Prerequisites A dedicated physical network must exist, connected to all worker and control plane nodes. You must isolate the native, untagged physical network. The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed . You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting. Procedure When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1 . Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file: USD oc get provisioning -o yaml > enable-provisioning-nw.yaml Modify the provisioning CR file: USD vim ~/enable-provisioning-nw.yaml Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed . Then, add the provisioningIP , provisioningNetworkCIDR , provisioningDHCPRange , provisioningInterface , and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting. apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6 1 The provisioningNetwork is one of Managed , Unmanaged , or Disabled . When set to Managed , Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. 
When set to Unmanaged , the system administrator configures the DHCP server manually. 2 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled . The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server. 3 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled . For example: 192.168.0.1/24 . 4 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled . For example: 192.168.0.64, 192.168.0.253 . 5 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled . Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead. 6 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false . Save the changes to the provisioning CR file. Apply the provisioning CR file to the cluster: USD oc apply -f enable-provisioning-nw.yaml 5.3. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 5.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 5.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 5.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. 
Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 5.3.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. 
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
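Before reloading a load balancer with a configuration like the one above, it can help to validate the file first. This sketch assumes HAProxy is the chosen vendor and that the configuration lives at the default path:
# Check the HAProxy configuration syntax (path is an assumption)
haproxy -c -f /etc/haproxy/haproxy.cfg
# Reload the service once the check passes
systemctl reload haproxy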
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible through the load balancer, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the machine config server resource is accessible through the load balancer, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible through the load balancer on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible through the load balancer on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private
[ "sudo dnf -y install butane", "variant: openshift version: 4.13.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan", "butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml", "variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony", "butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml", "oc apply -f 99-master-chrony-conf-override.yaml", "machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created", "oc apply -f 99-worker-chrony-conf-override.yaml", "machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created", "oc describe machineconfigpool", "oc get provisioning -o yaml > enable-provisioning-nw.yaml", "vim ~/enable-provisioning-nw.yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6", "oc apply -f enable-provisioning-nw.yaml", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz 
http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin 
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-post-installation-configuration
Chapter 52. MongoDB Source
Chapter 52. MongoDB Source Consume documents from MongoDB. If the persistentTailTracking option is enabled, the consumer keeps track of the last consumed message and, on restart, consumption resumes from that message. When persistentTailTracking is enabled, you must provide the tailTrackIncreasingField (otherwise it is optional). If the persistentTailTracking option is not enabled, the consumer consumes the whole collection and then waits idle for new documents to consume. 52.1. Configuration Options The following table summarizes the configuration options available for the mongodb-source Kamelet: Property Name Description Type Default Example collection * MongoDB Collection Sets the name of the MongoDB collection to bind to this endpoint. string database * MongoDB Database Sets the name of the MongoDB database to target. string hosts * MongoDB Hosts Comma-separated list of MongoDB host addresses in host:port format. string password * MongoDB Password User password for accessing MongoDB. string username * MongoDB Username Username for accessing MongoDB. The username must be present in MongoDB's authentication database (authenticationDatabase). By default, the MongoDB authenticationDatabase is 'admin'. string persistentTailTracking MongoDB Persistent Tail Tracking Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint recovers the cursor from the point where it last stopped slurping records. boolean false tailTrackIncreasingField MongoDB Tail Track Increasing Field Correlation field in the incoming record that is of an increasing nature and is used to position the tailing cursor every time it is generated. string Note Fields marked with an asterisk (*) are mandatory. 52.2. Dependencies At runtime, the mongodb-source Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:mongodb camel:jackson 52.3. Usage This section describes how you can use the mongodb-source . 52.3.1. Knative Source You can use the mongodb-source Kamelet as a Knative source by binding it to a Knative object. mongodb-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" password: "The MongoDB Password" username: "The MongoDB Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 52.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you are connected to. 52.3.1.2. Procedure for using the cluster CLI Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f mongodb-source-binding.yaml 52.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 52.3.2. 
Kafka Source You can use the mongodb-source Kamelet as a Kafka source by binding it to a Kafka topic. mongodb-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" password: "The MongoDB Password" username: "The MongoDB Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 52.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 52.3.2.2. Procedure for using the cluster CLI Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f mongodb-source-binding.yaml 52.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 52.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mongodb-source.kamelet.yaml
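The persistent tail tracking options described at the start of this chapter are not shown in the binding examples above. The following sketch extends the Kafka binding with persistentTailTracking and tailTrackIncreasingField; the field name ordertime is a hypothetical assumption and must be replaced with an increasing field that exists in your own documents.

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: mongodb-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mongodb-source
    properties:
      collection: "The MongoDB Collection"
      database: "The MongoDB Database"
      hosts: "The MongoDB Hosts"
      password: "The MongoDB Password"
      username: "The MongoDB Username"
      # Keep track of the last consumed document across restarts
      persistentTailTracking: true
      # Hypothetical increasing field used to position the tailing cursor
      tailTrackIncreasingField: "ordertime"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

With this configuration, restarting the integration resumes consumption from the last tracked position instead of re-reading the whole collection.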
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\" password: \"The MongoDB Password\" username: \"The MongoDB Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f mongodb-source-binding.yaml", "kamel bind mongodb-source -p \"source.collection=The MongoDB Collection\" -p \"source.database=The MongoDB Database\" -p \"source.hosts=The MongoDB Hosts\" -p \"source.password=The MongoDB Password\" -p \"source.username=The MongoDB Username\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\" password: \"The MongoDB Password\" username: \"The MongoDB Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f mongodb-source-binding.yaml", "kamel bind mongodb-source -p \"source.collection=The MongoDB Collection\" -p \"source.database=The MongoDB Database\" -p \"source.hosts=The MongoDB Hosts\" -p \"source.password=The MongoDB Password\" -p \"source.username=The MongoDB Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/mongodb-source
Chapter 2. Administering hosts
Chapter 2. Administering hosts This chapter describes creating, registering, administering, and removing hosts. 2.1. Creating a host in Red Hat Satellite Use this procedure to create a host in Red Hat Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . On the Host tab, enter the required details. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove. On the Puppet Classes tab, select the Puppet classes you want to include. On the Interfaces tab: For each interface, click Edit in the Actions column and configure the following settings as required: Type - For a Bond or BMC interface, use the Type list and select the interface type. MAC address - Enter the MAC address. DNS name - Enter the DNS name that is known to the DNS server. This is used for the host part of the FQDN. Domain - Select the domain name of the provisioning network. This automatically updates the Subnet list with a selection of suitable subnets. IPv4 Subnet - Select an IPv4 subnet for the host from the list. IPv6 Subnet - Select an IPv6 subnet for the host from the list. IPv4 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. The address can be omitted if provisioning tokens are enabled, if the domain does not manage DNS, if the subnet does not manage reverse DNS, or if the subnet does not manage DHCP reservations. IPv6 address - If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. Managed - Select this checkbox to configure the interface during provisioning to use the Capsule provided DHCP and DNS services. Primary - Select this checkbox to use the DNS name from this interface as the host portion of the FQDN. Provision - Select this checkbox to use this interface for provisioning. This means TFTP boot will take place using this interface, or in case of image based provisioning, the script to complete the provisioning will be executed through this interface. Note that many provisioning tasks, such as downloading packages by anaconda or Puppet setup in a %post script, will use the primary interface. Virtual NIC - Select this checkbox if this interface is not a physical device. This setting has two options: Tag - Optionally set a VLAN tag. If unset, the tag will be the VLAN ID of the subnet. Attached to - Enter the device name of the interface this virtual interface is attached to. Click OK to save the interface configuration. Optionally, click Add Interface to include an additional network interface. For more information, see Chapter 5, Adding network interfaces . Click Submit to apply the changes and exit. On the Operating System tab, enter the required details. For Red Hat operating systems, select Synced Content for Media Selection . If you want to use non Red Hat operating systems, select All Media , then select the installation media from the Media Selection list. You can select a partition table from the list or enter a custom partition table in the Custom partition table field. You cannot specify both. On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. 
This includes all Puppet Class, Ansible playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter . When you create a Red Hat Enterprise Linux 8 host, you can set system purpose attributes. System purpose attributes define what subscriptions to attach automatically on host creation. In the Host Parameters area, enter the following parameter names with the corresponding values. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . syspurpose_role syspurpose_sla syspurpose_usage syspurpose_addons If you want to create a host with pull mode for remote job execution, add the enable-remote-execution-pull parameter with type boolean set to true . For more information, see Section 12.4, "Transport modes for remote execution" . On the Additional Information tab, enter additional information about the host. Click Submit to complete your provisioning request. CLI procedure To create a host associated to a host group, enter the following command: This command prompts you to specify the root password. It is required to specify the host's IP and MAC address. Other properties of the primary network interface can be inherited from the host group or set using the --subnet , and --domain parameters. You can set additional interfaces using the --interface option, which accepts a list of key-value pairs. For the list of available interface settings, enter the hammer host create --help command. 2.2. Cloning hosts You can clone existing hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . In the Actions menu, click Clone . On the Host tab, ensure to provide a Name different from the original host. On the Interfaces tab, ensure to provide a different IP address. Click Submit to clone the host. For more information, see Section 2.1, "Creating a host in Red Hat Satellite" . 2.3. Associating a virtual machine with Satellite from a hypervisor Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select a compute resource. On the Virtual Machines tab, click Associate VM from the Actions menu. 2.4. Editing the system purpose of a host You can edit the system purpose attributes for a Red Hat Enterprise Linux host. System purpose allows you to set the intended use of a system on your network and improves reporting accuracy in the Subscriptions service of the Red Hat Hybrid Cloud Console. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Prerequisites The host that you want to edit must be registered with the subscription-manager. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Overview tab, click Edit on the System purpose card. Select the system purpose attributes for your host. Click Save . CLI procedure Log in to the host and edit the required system purpose attributes. For example, set the usage type to Production , the role to Red Hat Enterprise Linux Server , and add the addon add on. For the list of values, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Verify the system purpose attributes for this host: Automatically attach subscriptions to this host: Verify the system purpose status for this host: 2.5. 
Editing the system purpose of multiple hosts You can edit the system purpose attributes of Red Hat Enterprise Linux hosts. System purpose attributes define which subscriptions to attach automatically to hosts. For more information about system purpose, see Configuring System Purpose using the subscription-manager command-line tool in Automatically installing RHEL 8 . Prerequisites The hosts that you want to edit must be registered with the subscription-manager. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select Red Hat Enterprise Linux 8 hosts that you want to edit. Click the Select Action list and select Manage System Purpose . Select the system purpose attributes that you want to assign to the selected hosts. You can select one of the following values: A specific attribute to set an all selected hosts. No Change to keep the attribute set on the selected hosts. None (Clear) to clear the attribute on the selected hosts. Click Assign . In the Satellite web UI, navigate to Hosts > Content Hosts and select the same Red Hat Enterprise Linux 8 hosts to automatically attach subscriptions based on the system purpose. Click the Select Action list and select Manage Subscriptions . Click Auto-Attach to attach subscriptions to all selected hosts automatically based on their system role. 2.6. Changing a module stream for a host If you have a host running Red Hat Enterprise Linux 8, you can modify the module stream for the repositories you install. You can enable, disable, install, update, and remove module streams from your host in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the Content tab, then click the Module streams tab. Click the vertical ellipsis to the module and select the action you want to perform. You get a REX job notification once the remote execution job is complete. 2.7. Enabling custom repositories on content hosts As a Simple Content Access (SCA) user, you can enable all custom repositories on content hosts using the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select a host. Select the Content tab, then select Repository sets . From the dropdown, you can filter the Repository type column to Custom . Select the desired number of repositories or click the Select All checkbox to select all repositories, then click the vertical ellipsis, and select Override to Enabled . 2.8. Changing the content source of a host A content source is a Capsule that a host consumes content from. Use this procedure to change the content source for a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the vertical ellipsis icon to the Edit button and select Change content source . Select Content Source , Lifecycle Content View , and Content Source from the lists. Click Change content source . Note Some lifecycle environments can be unavailable for selection if they are not synced on the selected content source. For more information, see Adding lifecycle environments to Capsule Servers in Managing content . You can either complete the content source change using remote execution or manually. To update configuration on host using remote execution, click Run job invocation . For more information about running remote execution jobs, see Configuring and Setting up Remote Jobs . 
To update the content source manually, execute the autogenerated commands from Change content source on the host. 2.9. Changing the environment of a host Use this procedure to change the environment of a host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click the vertical ellipsis in the Content view details card and select Edit content view assignment . Select the environment. Select the content view. Click Save . 2.10. Changing the managed status of a host Hosts provisioned by Satellite are Managed by default. When a host is set to Managed, you can configure additional host parameters from Satellite Server. These additional parameters are listed on the Operating System tab. If you change any settings on the Operating System tab, they will not take effect until you set the host to build and reboot it. If you need to obtain reports about configuration management on systems using an operating system not supported by Satellite, set the host to Unmanaged. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Click Manage host or Unmanage host to change the host's status. Click Submit . 2.11. Enabling Tracer on a host Use this procedure to enable Tracer on Satellite and access Traces. Tracer displays a list of services and applications that need to be restarted. Traces is the output generated by Tracer in the Satellite web UI. Prerequisites Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Remote execution is enabled. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. On the Traces tab, click Enable Traces . Select the provider to install katello-host-tools-tracer from the list. Click Enable Tracer . You get a REX job notification after the remote execution job is complete. 2.12. Restarting applications on a host Use this procedure to restart applications from the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the hosts you want to modify. Select the Traces tab. Select applications that you want to restart. Select Restart via remote execution from the Restart app list. You will get a REX job notification once the remote execution job is complete. 2.13. Assigning a host to a specific organization Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in Administering Red Hat Satellite . Note If your host is already registered with a different organization, you must first unregister the host before assigning it to a new organization. To unregister the host, run subscription-manager unregister on the host. After you assign the host to a new organization, you can re-register the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host you want to change. From the Select Action list, select Assign Organization . A new option window opens. From the Select Organization list, select the organization that you want to assign your host to. Select the checkbox Fix Organization on Mismatch . 
Note A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the organization you want to assign the host to. The option Fix Organization on Mismatch will add such a resource to the organization, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one organization to another will fail, even if there is no actual mismatch in settings. Click Submit . 2.14. Assigning a host to a specific location Use this procedure to assign a host to a specific location. For general information about locations and how to configure them, see Creating a Location in Managing content . Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox of the host you want to change. From the Select Action list, select Assign Location . A new option window opens. Navigate to the Select Location list and choose the location that you want for your host. Select the checkbox Fix Location on Mismatch . Note A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the location you want to assign the host to. The option Fix Location on Mismatch will add such a resource to the location, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one location to another will fail, even if there is no actual mismatch in settings. Click Submit . 2.15. Switching between hosts When you are on a particular host in the Satellite web UI, you can navigate between hosts without leaving the page by using the host switcher. Click ⇄ to the hostname. This displays a list of hosts in alphabetical order with a pagination arrow and a search bar to find the host you are looking for. 2.16. Viewing host details from a content host Use this procedure to view the host details page from a content host. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts Click the content host you want to view. Select the Details tab to see the host details page. The cards in the Details tab show details for the System properties , BIOS , Networking interfaces , Operating system , Provisioning templates , and Provisioning . Registered content hosts show additional cards for Registration details , Installed products , and HW properties providing information about Model , Number of CPU(s) , Sockets , Cores per socket , and RAM . 2.17. Selecting host columns You can select what columns you want to see in the host table on the Hosts > All Hosts page. For a complete list of host columns, see Appendix D, Overview of the host columns . Note It is not possible to deselect the Name column. The Name column serves as a primary identification method of the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select columns that you want to display. You can select individual columns or column categories. Selecting or deselecting a category selects or deselects all columns in that category. Note Some columns are included in more than one category, but you can display a column of a specific type only once. By selecting or deselecting a specific column, you select or deselect all instances of that column. Verification You can now see the selected columns in the host table. 2.18. Removing a host from Satellite Use this procedure to remove a host from Satellite. 
To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts or Hosts > Content Hosts . Note that there is no difference from what page you remove a host, from All Hosts or Content Hosts . In both cases, Satellite removes a host completely. Select the hosts that you want to remove. From the Select Action list, select Delete Hosts . Click Submit to remove the host from Satellite permanently. Warning By default, the Destroy associated VM on host delete setting is set to no . If a host record that is associated with a virtual machine is deleted, the virtual machine will remain on the compute resource. To delete a virtual machine on the compute resource, navigate to Administer > Settings and select the Provisioning tab. Setting Destroy associated VM on host delete to yes deletes the virtual machine if the host record that is associated with the virtual machine is deleted. To avoid deleting the virtual machine in this situation, disassociate the virtual machine from Satellite without removing it from the compute resource or change the setting. CLI procedure Delete your host from Satellite: Alternatively, you can use --name My_Host_Name instead of --id My_Host_ID . 2.18.1. Disassociating a virtual machine from Satellite without removing it from a hypervisor Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select the checkbox to the left of the hosts that you want to disassociate. From the Select Action list, click Disassociate Hosts . Optional: Select the checkbox to keep the hosts for future action. Click Submit . 2.19. Lifecycle status of RHEL hosts Satellite provides multiple mechanisms to display information about upcoming End of Support (EOS) events for your Red Hat Enterprise Linux hosts: Notification banner A column on the Hosts index page Alert on the Hosts index page for each host that runs Red Hat Enterprise Linux with an upcoming EOS event in a year as well as when support has ended Ability to Search for hosts by EOS on the Hosts index page Host status card on the host details page For any hosts that are not running Red Hat Enterprise Linux, Satellite displays Unknown in the RHEL Lifecycle status and Last report columns. EOS notification banner When either the end of maintenance support or the end of extended lifecycle support approaches in a year, you will see a notification banner in the Satellite web UI if you have hosts with that Red Hat Enterprise Linux version. The notification provides information about the Red Hat Enterprise Linux version, the number of hosts running that version in your environment, the lifecycle support, and the expiration date. Along with other information, the Red Hat Enterprise Linux lifecycle column is visible in the notification. 2.19.1. Displaying RHEL lifecycle status You can display the status of the end of support (EOS) for your Red Hat Enterprise Linux hosts in the table on the Hosts index page. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select the Content column to expand it. Select RHEL Lifecycle status . Click Save to generate a new column that displays the Red Hat Enterprise Linux lifecycle status. 2.19.2. Host search by RHEL lifecycle status You can use the Search field to search hosts by rhel_lifecycle_status . It can have one of the following values: full_support maintenance_support approaching_end_of_maintenance extended_support approaching_end_of_support support_ended
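For example, to list hosts whose maintenance support ends soon, you could enter a query such as the following in the Search field on the Hosts index page. This is a sketch that assumes the standard Satellite search syntax together with the field and value listed above:

rhel_lifecycle_status = approaching_end_of_maintenance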
[ "hammer host create --ask-root-password yes --hostgroup \" My_Host_Group \" --interface=\"primary=true, provision=true, mac= My_MAC_Address , ip= My_IP_Address \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \"", "subscription-manager syspurpose set usage ' Production ' subscription-manager syspurpose set role ' Red Hat Enterprise Linux Server ' subscription-manager syspurpose add addons ' your_addon '", "subscription-manager syspurpose", "subscription-manager attach --auto", "subscription-manager status", "hammer host delete --id My_Host_ID --location-id My_Location_ID --organization-id My_Organization_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/administering_hosts_managing-hosts
25.7.3. Enabling Encrypted Transport
25.7.3. Enabling Encrypted Transport Confidentiality and integrity in network transmissions can be provided by using either the TLS or GSSAPI protocol. Transport Layer Security (TLS) is a cryptographic protocol designed to provide communication security over the network. When using TLS, rsyslog messages are encrypted before sending, and mutual authentication exists between the sender and receiver. Generic Security Service API (GSSAPI) is an application programming interface for programs to access security services. To use GSSAPI with rsyslog, you must have a functioning Kerberos environment.
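As an illustration only, the following minimal client-side sketch shows legacy-format rsyslog directives that are typically involved in forwarding messages over TLS. The CA certificate path and remote host name are assumptions, and the gtls network stream driver is typically provided by the rsyslog-gnutls package.

# /etc/rsyslog.conf fragment (sketch; assumed paths and host name)
$DefaultNetstreamDriver gtls                            # use the GnuTLS network stream driver
$DefaultNetstreamDriverCAFile /etc/pki/rsyslog/ca.pem   # CA certificate used to verify the server
$ActionSendStreamDriverMode 1                           # require TLS for the forwarding action
$ActionSendStreamDriverAuthMode x509/name               # authenticate the peer by certificate name
*.* @@remote-host.example.com:6514                      # forward all messages over TCP to the remote host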
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-enabling_encrypted_transport
Chapter 3. Adding client dependencies to your Maven project
Chapter 3. Adding client dependencies to your Maven project If you are developing Java-based Kafka clients, you can add the Red Hat dependencies for Kafka clients, including Kafka Streams, to the pom.xml file of your Maven project. Only client libraries built by Red Hat are supported for Streams for Apache Kafka. You can add the following artifacts as dependencies: kafka-clients Contains the Kafka Producer , Consumer , and AdminClient APIs. The Producer API enables applications to send data to a Kafka broker. The Consumer API enables applications to consume data from a Kafka broker. The AdminClient API provides functionality for managing Kafka clusters, including topics, brokers, and other components. kafka-streams Contains the KafkaStreams API. Kafka Streams enables applications to receive data from one or more input streams. You can use this API to run a sequence of real-time operations on streams of data, like mapping, filtering, and joining. You can use Kafka Streams to write results into one or more output streams. It is part of the kafka-streams JAR package that is available in the Red Hat Maven repository. 3.1. Adding a Kafka clients dependency to your Maven project Add a Red Hat dependency for Kafka clients to your Maven project. Prerequisites A Maven project with an existing pom.xml . Procedure Add the Red Hat Maven repository to the <repositories> section of the pom.xml file of your Maven project. Add kafka-clients as a <dependency> to the pom.xml file of your Maven project. Build the Maven project to add the Kafka client dependency to the project. 3.2. Adding a Kafka Streams dependency to your Maven project Add a Red Hat dependency for Kafka Streams to your Maven project. Prerequisites A Maven project with an existing pom.xml . Procedure Add the Red Hat Maven repository to the <repositories> section of the pom.xml file of your Maven project. Add kafka-streams as a <dependency> to the pom.xml file of your Maven project. Build the Maven project to add the Kafka Streams dependency to the project. 3.3. Adding an OAuth 2.0 dependency to your Maven project Add a Red Hat dependency for OAuth 2.0 to your Maven project. Prerequisites A Maven project with an existing pom.xml . Procedure Add the Red Hat Maven repository to the <repositories> section of the pom.xml file of your Maven project. Add kafka-oauth-client as a <dependency> to the pom.xml file of your Maven project. Build the Maven project to add the OAuth 2.0 dependency to the project.
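To confirm that the kafka-clients dependency resolves after the build, you can compile a minimal producer such as the following sketch. The broker address, topic name, and record values are placeholder assumptions rather than values from this guide.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        // Placeholder connection settings; replace the bootstrap address with your broker
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-kafka-bootstrap:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Send one record to a placeholder topic, then close the producer
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "hello"));
        }
    }
}

A similar smoke test for the kafka-streams artifact is to reference a class such as org.apache.kafka.streams.StreamsBuilder and confirm that it resolves on the project classpath.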
[ "<repositories> <repository> <id>redhat-maven</id> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>", "<dependencies> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>3.9.0.redhat-00003</version> </dependency> </dependencies>", "<repositories> <repository> <id>redhat-maven</id> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>", "<dependencies> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-streams</artifactId> <version>3.9.0.redhat-00003</version> </dependency> </dependencies>", "<repositories> <repository> <id>redhat-maven</id> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>", "<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00012</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/developing_kafka_client_applications/assembly-kafka-clients-maven-str
Chapter 51. DeploymentTemplate schema reference
Chapter 51. DeploymentTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate Full list of DeploymentTemplate schema properties Use deploymentStrategy to specify the strategy used to replace old pods with new ones when deployment configuration changes. Use one of the following values: RollingUpdate : Pods are restarted with zero downtime. Recreate : Pods are terminated before new ones are created. Using the Recreate deployment strategy has the advantage of not requiring spare resources, but the disadvantage is the application downtime. Example showing the deployment strategy set to Recreate . # ... template: deployment: deploymentStrategy: Recreate # ... This configuration change does not cause a rolling update. 51.1. DeploymentTemplate schema properties Property Description metadata Metadata applied to the resource. MetadataTemplate deploymentStrategy Pod replacement strategy for deployment configuration changes. Valid values are RollingUpdate and Recreate . Defaults to RollingUpdate . string (one of [RollingUpdate, Recreate])
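To see where this template fits in practice, the following sketch places the deployment strategy inside a KafkaConnect resource, which uses the KafkaConnectTemplate listed above. The resource name and replica count are illustrative assumptions only.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster     # assumed name
spec:
  replicas: 1                  # assumed replica count
  # ...
  template:
    deployment:
      deploymentStrategy: Recreate   # terminate old pods before new ones are created
  # ...

Recreate can be useful on clusters without spare capacity for a rolling replacement, at the cost of brief downtime while the new pods start.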
[ "template: deployment: deploymentStrategy: Recreate" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-deploymenttemplate-reference
4. Recommended References
4. Recommended References For additional references about related topics, refer to the following table: Table 1. Recommended References Table Topic Reference Comment Shared Data Clustering and File Systems Shared Data Clusters by Dilip M. Ranade. Wiley, 2002. Provides detailed technical information on cluster file system and cluster volume-manager design. Storage Area Networks (SANs) Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs, Second Edition by Tom Clark. Addison-Wesley, 2003. Provides a concise summary of Fibre Channel and IP SAN Technology. Building SANs with Brocade Fabric Switches by C. Beauchamp, J. Judd, and B. Keo. Syngress, 2001. Best practices for building Fibre Channel SANs based on the Brocade family of switches, including core-edge topology for large SAN fabrics. Building Storage Networks, Second Edition by Marc Farley. Osborne/McGraw-Hill, 2001. Provides a comprehensive overview reference on storage networking technologies. Applications and High Availability Blueprints for High Availability: Designing Resilient Distributed Systems by E. Marcus and H. Stern. Wiley, 2000. Provides a summary of best practices in high availability.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-intro-references-GFS
6.2. Enabling Smart Card Login
6.2. Enabling Smart Card Login Smart card login for Red Hat Enterprise Linux servers and workstations is not enabled by default and must be enabled in the system settings. Note Using single sign-on when logging into Red Hat Enterprise Linux requires these packages: nss-tools esc pam_pkcs11 coolkey ccid gdm authconfig authconfig-gtk krb5-libs krb5-workstation krb5-auth-dialog krb5-pkinit-openssl Log into the system as root. Download the root CA certificates for the network in base 64 format, and install them on the server. The certificates are installed in the appropriate system database using the certutil command. For example: In the top menu, select the System menu, select Administration , and then click Authentication . Open the Advanced Options tab. Click the Enable Smart Card Support checkbox. When the button is active, click Configure smart card ... . There are two behaviors that can be configured for smart cards: The Require smart card for login checkbox requires smart cards and essentially disables Kerberos password authentication for logging into the system. Do not select this until after you have successfully logged in using a smart card. The Card removal action menu sets the response that the system takes if the smart card is removed during an active session. Ignore means that the system continues functioning as normal if the smart card is removed, while Lock immediately locks the screen. By default, the mechanisms to check whether a certificate has been revoked (Online Certificate Status Protocol, or OCSP, responses) are disabled. To validate whether a certificate has been revoked before its expiration period, enable OCSP checking by adding the ocsp_on option to the cert_policy directive. Open the pam_pkcs11.conf file. Change every cert_policy line so that it contains the ocsp_on option. Note Because of the way the file is parsed, there must be a space between cert_policy and the equals sign. Otherwise, parsing the parameter fails. If the smart card has not yet been enrolled (set up with personal certificates and keys), enroll the smart card, as described in Section 5.3, "Enrolling a Smart Card Automatically" . If the smart card is a CAC card, the PAM modules used for smart card login must be configured to recognize the specific CAC card. As root, create a file called /etc/pam_pkcs11/cn_map . Add the following entry to the cn_map file: MY.CAC_CN.123454 is the common name on the CAC card and login is the Red Hat Enterprise Linux login ID. Note When a smart card is inserted, the pklogin_finder tool (in debug mode) first maps the login ID to the certificates on the card and then attempts to output information about the validity of certificates. This is useful for diagnosing any problems with using the smart card to log into the system.
[ "certutil -A -d /etc/pki/nssdb -n \"root CA cert\" -t \"CT,C,C\" -i /tmp/ca_cert.crt", "vim /etc/pam_pkcs11/pam_pkcs11.conf", "cert_policy = ca, ocsp_on, signature;", "MY.CAC_CN.123454 -> login", "pklogin_finder debug" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/enabling-smart-card-login
Chapter 13. Process definitions and process instances in Business Central
Chapter 13. Process definitions and process instances in Business Central A process definition is a Business Process Model and Notation (BPMN) 2.0 file that serves as a container for a process and its BPMN diagram. The process definition shows all of the available information about the business process, such as any associated sub-processes or the number of users and groups that are participating in the selected definition. A process definition also defines the import entry for imported processes that the process definition uses, and the relationship entries. BPMN2 source of a process definition After you have created, configured, and deployed your project that includes your business processes, you can view the list of all the process definitions in Business Central Menu Manage Process Definitions . You can refresh the list of deployed process definitions at any time by clicking the refresh button in the upper-right corner. The process definition list shows all the available process definitions that are deployed into the platform. Click any of the process definitions listed to show the corresponding process definition details. This displays information about the process definition, such as if there is a sub-process associated with it, or how many users and groups exist in the process definition. The Diagram tab in the process definition details page contains the BPMN2-based diagram of the process definition. Within each selected process definition, you can start a new process instance for the process definition by clicking the New Process Instance button in the upper-right corner. Process instances that you start from the available process definitions are listed in Menu Manage Process Instances . You can also define the default pagination option for all users under the Manage drop-down menu ( Process Definition , Process Instances , Tasks , Jobs , and Execution Errors ) and in Menu Track Task Inbox . For more information about process and task administration in Business Central, see Managing and monitoring business processes in Business Central . 13.1. Starting a process instance from the process definitions page You can start a process instance in Menu Manage Process Definitions . This is useful for environments where you are working with several projects or process definitions at the same time. Prerequisites A project with a process definition has been deployed in Business Central. Procedure In Business Central, go to Menu Manage Process Definitions . Select the process definition for which you want to start a new process instance from the list. The details page of the definition opens. Click New Process Instance in the upper-right corner to start a new process instance. Provide any required information for the process instance. Click Submit to create the process instance. View the new process instance in Menu Manage Process Instances . 13.2. Starting a process instance from the process instances page You can create new process instances or view the list of all the running process instances in Menu Manage Process Instances . Prerequisites A project with a process definition has been deployed in Business Central. Procedure In Business Central, go to Menu Manage Process Instances . Click New Process Instance in the upper-right corner and select the process definition for which you want to start a new process instance from the drop-down list. Provide any information required to start a new process instance. Click Start to create the process instance. 
The new process instance appears in the Manage Process Instances list. 13.3. Process definitions in XML You can create processes directly in XML format using the BPMN 2.0 specifications. The syntax of these XML processes is defined using the BPMN 2.0 XML Schema Definition. A process XML file consists of the following core sections: process : This is the top part of the process XML that contains the definition of the different nodes and their properties. The process XML file consists of exactly one <process> element. This element contains parameters related to the process (its type, name, ID, and package name), and consists of three subsections: a header section where process-level information such as variables, globals, imports, and lanes are defined, a nodes section that defines each of the nodes in the process, and a connections section that contains the connections between all the nodes in the process. BPMNDiagram : This is the lower part of the process XML file that contains all graphical information, such as the location of the nodes. The nodes section contains a specific element for each node and defines the various parameters and any sub-elements for that node type. The following process XML file fragment shows a simple process that contains a sequence of a start event, a script task that prints "Hello World" to the console, and an end event: <?xml version="1.0" encoding="UTF-8"?> <definitions id="Definition" targetNamespace="http://www.jboss.org/drools" typeLanguage="http://www.java.com/javaTypes" expressionLanguage="http://www.mvel.org/2.0" xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd" xmlns:g="http://www.jboss.org/drools/flow/gpd" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:dc="http://www.omg.org/spec/DD/20100524/DC" xmlns:di="http://www.omg.org/spec/DD/20100524/DI" xmlns:tns="http://www.jboss.org/drools"> <process processType="Private" isExecutable="true" id="com.sample.hello" name="Hello Process"> <!-- nodes --> <startEvent id="_1" name="Start" /> <scriptTask id="_2" name="Hello"> <script>System.out.println("Hello World");</script> </scriptTask> <endEvent id="_3" name="End" > <terminateEventDefinition/> </endEvent> <!-- connections --> <sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" /> <sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" /> </process> <bpmndi:BPMNDiagram> <bpmndi:BPMNPlane bpmnElement="com.sample.hello" > <bpmndi:BPMNShape bpmnElement="_1" > <dc:Bounds x="16" y="16" width="48" height="48" /> </bpmndi:BPMNShape> <bpmndi:BPMNShape bpmnElement="_2" > <dc:Bounds x="96" y="16" width="80" height="48" /> </bpmndi:BPMNShape> <bpmndi:BPMNShape bpmnElement="_3" > <dc:Bounds x="208" y="16" width="48" height="48" /> </bpmndi:BPMNShape> <bpmndi:BPMNEdge bpmnElement="_1-_2" > <di:waypoint x="40" y="40" /> <di:waypoint x="136" y="40" /> </bpmndi:BPMNEdge> <bpmndi:BPMNEdge bpmnElement="_2-_3" > <di:waypoint x="136" y="40" /> <di:waypoint x="232" y="40" /> </bpmndi:BPMNEdge> </bpmndi:BPMNPlane> </bpmndi:BPMNDiagram> </definitions>
[ "<definitions id=\"Definition\" targetNamespace=\"http://www.jboss.org/drools\" typeLanguage=\"http://www.java.com/javaTypes\" expressionLanguage=\"http://www.mvel.org/2.0\" xmlns=\"http://www.omg.org/spec/BPMN/20100524/MODEL\"Rule Task xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd\" xmlns:g=\"http://www.jboss.org/drools/flow/gpd\" xmlns:bpmndi=\"http://www.omg.org/spec/BPMN/20100524/DI\" xmlns:dc=\"http://www.omg.org/spec/DD/20100524/DC\" xmlns:di=\"http://www.omg.org/spec/DD/20100524/DI\" xmlns:tns=\"http://www.jboss.org/drools\"> <process> PROCESS </process> <bpmndi:BPMNDiagram> BPMN DIAGRAM DEFINITION </bpmndi:BPMNDiagram> </definitions>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions id=\"Definition\" targetNamespace=\"http://www.jboss.org/drools\" typeLanguage=\"http://www.java.com/javaTypes\" expressionLanguage=\"http://www.mvel.org/2.0\" xmlns=\"http://www.omg.org/spec/BPMN/20100524/MODEL\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd\" xmlns:g=\"http://www.jboss.org/drools/flow/gpd\" xmlns:bpmndi=\"http://www.omg.org/spec/BPMN/20100524/DI\" xmlns:dc=\"http://www.omg.org/spec/DD/20100524/DC\" xmlns:di=\"http://www.omg.org/spec/DD/20100524/DI\" xmlns:tns=\"http://www.jboss.org/drools\"> <process processType=\"Private\" isExecutable=\"true\" id=\"com.sample.hello\" name=\"Hello Process\"> <!-- nodes --> <startEvent id=\"_1\" name=\"Start\" /> <scriptTask id=\"_2\" name=\"Hello\"> <script>System.out.println(\"Hello World\");</script> </scriptTask> <endEvent id=\"_3\" name=\"End\" > <terminateEventDefinition/> </endEvent> <!-- connections --> <sequenceFlow id=\"_1-_2\" sourceRef=\"_1\" targetRef=\"_2\" /> <sequenceFlow id=\"_2-_3\" sourceRef=\"_2\" targetRef=\"_3\" /> </process> <bpmndi:BPMNDiagram> <bpmndi:BPMNPlane bpmnElement=\"com.sample.hello\" > <bpmndi:BPMNShape bpmnElement=\"_1\" > <dc:Bounds x=\"16\" y=\"16\" width=\"48\" height=\"48\" /> </bpmndi:BPMNShape> <bpmndi:BPMNShape bpmnElement=\"_2\" > <dc:Bounds x=\"96\" y=\"16\" width=\"80\" height=\"48\" /> </bpmndi:BPMNShape> <bpmndi:BPMNShape bpmnElement=\"_3\" > <dc:Bounds x=\"208\" y=\"16\" width=\"48\" height=\"48\" /> </bpmndi:BPMNShape> <bpmndi:BPMNEdge bpmnElement=\"_1-_2\" > <di:waypoint x=\"40\" y=\"40\" /> <di:waypoint x=\"136\" y=\"40\" /> </bpmndi:BPMNEdge> <bpmndi:BPMNEdge bpmnElement=\"_2-_3\" > <di:waypoint x=\"136\" y=\"40\" /> <di:waypoint x=\"232\" y=\"40\" /> </bpmndi:BPMNEdge> </bpmndi:BPMNPlane> </bpmndi:BPMNDiagram> </definitions>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/process-definitions-and-instances-con-business-processes
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/pr01
Chapter 2. Starting and stopping Red Hat Satellite
Chapter 2. Starting and stopping Red Hat Satellite Satellite provides the satellite-maintain service command to manage Satellite services from the command line. This is useful when creating a backup of Satellite. For more information on creating backups, see Chapter 11, Backing up Satellite Server and Capsule Server . After installing Satellite with the satellite-installer command, all Satellite services are started and enabled automatically. View the list of these services by executing: To see the status of running services, execute: To stop Satellite services, execute: To start Satellite services, execute: To restart Satellite services, execute:
[ "satellite-maintain service list", "satellite-maintain service status", "satellite-maintain service stop", "satellite-maintain service start", "satellite-maintain service restart" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/starting_and_stopping_server_admin
Chapter 48. limits
Chapter 48. limits This chapter describes the commands under the limits command. 48.1. limits show Show compute and block storage limits Usage: Table 48.1. Command arguments Value Summary -h, --help Show this help message and exit --absolute Show absolute limits --rate Show rate limits --reserved Include reservations count [only valid with --absolute] --project <project> Show limits for a specific project (name or id) [only valid with --absolute] --domain <domain> Domain the project belongs to (name or id) [only valid with --absolute] Table 48.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 48.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 48.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 48.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack limits show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] (--absolute | --rate) [--reserved] [--project <project>] [--domain <domain>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/limits
Chapter 2. Adding a User Storage Provider (LDAP/Kerberos) to Ansible Automation Platform Central Authentication
Chapter 2. Adding a User Storage Provider (LDAP/Kerberos) to Ansible Automation Platform Central Authentication Ansible Automation Platform Central Authentication comes with a built-in LDAP/AD provider. You can add your LDAP provider to central authentication to be able to import user attributes from your LDAP database. Prerequisites You are logged in as an SSO admin user. Procedure Log in to Ansible Automation Platform Central Authentication as an SSO admin user. From the navigation panel, select Configure section User Federation . Note When using an LDAP User Federation in RH-SSO, a group mapper must be added to the client configuration, ansible-automation-platform, to expose the identity provider (IDP) groups to the SAML authentication. Refer to OIDC Token and SAML Assertion Mappings for more information on SAML assertion mappers. From the Add provider list, select your LDAP provider to proceed to the LDAP configuration page. The following table lists the available options for your LDAP configuration: Configuration Option Description Storage mode Set to On if you want to import users into the central authentication user database. See Storage Mode for more information. Edit mode Determines the types of modifications that admins can make on user metadata. See Edit Mode for more information. Console Display Name Name used when this provider is referenced in the admin console Priority The priority of this provider when looking up users or adding a user Sync Registrations Enable if you want new users created by Ansible Automation Platform Central Authentication in the admin console or the registration page to be added to LDAP Allow Kerberos authentication Enable Kerberos/SPNEGO authentication in the realm with users data provisioned from LDAP. See Kerberos for more information.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/assembly-central-auth-add-user-storage
Chapter 2. The core Ceph components
Chapter 2. The core Ceph components A Red Hat Ceph Storage cluster can have a large number of Ceph nodes for limitless scalability, high availability and performance. Each node leverages non-proprietary hardware and intelligent Ceph daemons that communicate with each other to: Write and read data Compress data Ensure durability by replicating or erasure coding data Monitor and report on cluster health- also called 'heartbeating' Redistribute data dynamically- also called 'backfilling' Ensure data integrity; and, Recover from failures. To the Ceph client interface that reads and writes data, a Red Hat Ceph Storage cluster looks like a simple pool where it stores data. However, librados and the storage cluster perform many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph OSDs both use the CRUSH (Controlled Replication Under Scalable Hashing) algorithm. The following sections provide details on how CRUSH enables Ceph to perform these operations seamlessly. Prerequisites A basic understanding of distributed storage systems. 2.1. Ceph pools The Ceph storage cluster stores data objects in logical partitions called 'Pools'. Ceph administrators can create pools for particular types of data, such as for block devices, object gateways, or simply just to separate one group of users from another. From the perspective of a Ceph client, the storage cluster is very simple. When a Ceph client reads or writes data using an I/O context, it always connects to a storage pool in the Ceph storage cluster. The client specifies the pool name, a user and a secret key, so the pool appears to act as a logical partition with access controls to its data objects. In actual fact, a Ceph pool is not only a logical partition for storing object data. A pool plays a critical role in how the Ceph storage cluster distributes and stores data. However, these complex operations are completely transparent to the Ceph client. Ceph pools define: Pool Type: In early versions of Ceph, a pool simply maintained multiple deep copies of an object. Today, Ceph can maintain multiple copies of an object, or it can use erasure coding to ensure durability. The data durability method is pool-wide, and does not change after creating the pool. The pool type defines the data durability method when creating the pool. Pool types are completely transparent to the client. Placement Groups: In an exabyte scale storage cluster, a Ceph pool might store millions of data objects or more. Ceph must handle many types of operations, including data durability via replicas or erasure code chunks, data integrity by scrubbing or CRC checks, replication, rebalancing and recovery. Consequently, managing data on a per-object basis presents a scalability and performance bottleneck. Ceph addresses this bottleneck by sharding a pool into placement groups. The CRUSH algorithm computes the placement group for storing an object and computes the Acting Set of OSDs for the placement group. CRUSH puts each object into a placement group. Then, CRUSH stores each placement group in a set of OSDs. System administrators set the placement group count when creating or modifying a pool. CRUSH Ruleset: CRUSH plays another important role: CRUSH can detect failure domains and performance domains. CRUSH can identify OSDs by storage media type and organize OSDs hierarchically into nodes, racks, and rows. CRUSH enables Ceph OSDs to store object copies across failure domains. 
For example, copies of an object may get stored in different server rooms, aisles, racks and nodes. If a large part of a cluster fails, such as a rack, the cluster can still operate in a degraded state until the cluster recovers. Additionally, CRUSH enables clients to write data to particular types of hardware, such as SSDs, hard drives with SSD journals, or hard drives with journals on the same drive as the data. The CRUSH ruleset determines failure domains and performance domains for the pool. Administrators set the CRUSH ruleset when creating a pool. Note An administrator CANNOT change a pool's ruleset after creating the pool. Durability : In exabyte scale storage clusters, hardware failure is an expectation and not an exception. When using data objects to represent larger-grained storage interfaces such as a block device, losing one or more data objects for that larger-grained interface can compromise the integrity of the larger-grained storage entity- potentially rendering it useless. So data loss is intolerable. Ceph provides high data durability in two ways: Replica pools store multiple deep copies of an object using the CRUSH failure domain to physically separate one data object copy from another. That is, copies get distributed to separate physical hardware. This increases durability during hardware failures. Erasure coded pools store each object as K+M chunks, where K represents data chunks and M represents coding chunks. The sum represents the number of OSDs used to store the object and the M value represents the number of OSDs that can fail and still restore data should the M number of OSDs fail. From the client perspective, Ceph is elegant and simple. The client simply reads from and writes to pools. However, pools play an important role in data durability, performance and high availability. 2.2. Ceph authentication To identify users and protect against man-in-the-middle attacks, Ceph provides its cephx authentication system, which authenticates users and daemons. Note The cephx protocol does not address data encryption for data transported over the network or data stored in OSDs. Cephx uses shared secret keys for authentication, meaning both the client and the monitor cluster have a copy of the client's secret key. The authentication protocol enables both parties to prove to each other that they have a copy of the key without actually revealing it. This provides mutual authentication, which means the cluster is sure the user possesses the secret key, and the user is sure that the cluster has a copy of the secret key. Cephx The cephx authentication protocol operates in a manner similar to Kerberos. A user/actor invokes a Ceph client to contact a monitor. Unlike Kerberos, each monitor can authenticate users and distribute keys, so there is no single point of failure or bottleneck when using cephx . The monitor returns an authentication data structure similar to a Kerberos ticket that contains a session key for use in obtaining Ceph services. This session key is itself encrypted with the user's permanent secret key, so that only the user can request services from the Ceph monitors. The client then uses the session key to request its desired services from the monitor, and the monitor provides the client with a ticket that will authenticate the client to the OSDs that actually handle data. Ceph monitors and OSDs share a secret, so the client can use the ticket provided by the monitor with any OSD or metadata server in the cluster. 
Like Kerberos, cephx tickets expire, so an attacker cannot use an expired ticket or session key obtained surreptitiously. This form of authentication will prevent attackers with access to the communications medium from either creating bogus messages under another user's identity or altering another user's legitimate messages, as long as the user's secret key is not divulged before it expires. To use cephx , an administrator must set up users first. In the following diagram, the client.admin user invokes ceph auth get-or-create-key from the command line to generate a username and secret key. Ceph's auth subsystem generates the username and key, stores a copy with the monitor(s) and transmits the user's secret back to the client.admin user. This means that the client and the monitor share a secret key. Note The client.admin user must provide the user ID and secret key to the user in a secure manner. 2.3. Ceph placement groups Storing millions of objects in a cluster and managing them individually is resource intensive. So Ceph uses placement groups (PGs) to make managing a huge number of objects more efficient. A PG is a subset of a pool that serves to contain a collection of objects. Ceph shards a pool into a series of PGs. Then, the CRUSH algorithm takes the cluster map and the status of the cluster into account and distributes the PGs evenly and pseudo-randomly to OSDs in the cluster. Here is how it works. When a system administrator creates a pool, CRUSH creates a user-defined number of PGs for the pool. Generally, the number of PGs should be a reasonably fine-grained subset of the data. For example, 100 PGs per OSD per pool would mean that each PG contains approximately 1% of the pool's data. The number of PGs has a performance impact when Ceph needs to move a PG from one OSD to another OSD. If the pool has too few PGs, Ceph will move a large percentage of the data simultaneously and the network load will adversely impact the cluster's performance. If the pool has too many PGs, Ceph will use too much CPU and RAM when moving tiny percentages of the data and thereby adversely impact the cluster's performance. For details on calculating the number of PGs to achieve optimal performance, see Placement group count . Ceph ensures against data loss by storing replicas of an object or by storing erasure code chunks of an object. Since Ceph stores objects or erasure code chunks of an object within PGs, Ceph replicates each PG in a set of OSDs called the "Acting Set" for each copy of an object or each erasure code chunk of an object. A system administrator can determine the number of PGs in a pool and the number of replicas or erasure code chunks. However, the CRUSH algorithm calculates which OSDs are in the acting set for a particular PG. The CRUSH algorithm and PGs make Ceph dynamic. Changes in the cluster map or the cluster state may result in Ceph moving PGs from one OSD to another automatically. Here are a few examples: Expanding the Cluster: When adding a new host and its OSDs to the cluster, the cluster map changes. Since CRUSH evenly and pseudo-randomly distributes PGs to OSDs throughout the cluster, adding a new host and its OSDs means that CRUSH will reassign some of the pool's placement groups to those new OSDs. That means that system administrators do not have to rebalance the cluster manually. Also, it means that the new OSDs contain approximately the same amount of data as the other OSDs. 
This also means that new OSDs do not contain only newly written data, preventing "hot spots" in the cluster. An OSD Fails: When an OSD fails, the state of the cluster changes. Ceph temporarily loses one of the replicas or erasure code chunks, and needs to make another copy. If the primary OSD in the acting set fails, the next OSD in the acting set becomes the primary and CRUSH calculates a new OSD to store the additional copy or erasure code chunk. By managing millions of objects within the context of hundreds to thousands of PGs, the Ceph storage cluster can grow, shrink and recover from failure efficiently. For Ceph clients, the CRUSH algorithm via librados makes the process of reading and writing objects very simple. A Ceph client simply writes an object to a pool or reads an object from a pool. The primary OSD in the acting set can write replicas of the object or erasure code chunks of the object to the secondary OSDs in the acting set on behalf of the Ceph client. If the cluster map or cluster state changes, the CRUSH computation for which OSDs store the PG will change too. For example, a Ceph client may write object foo to the pool bar . CRUSH will assign the object to PG 1.a , and store it on OSD 5 , which makes replicas on OSD 10 and OSD 15 respectively. If OSD 5 fails, the cluster state changes. When the Ceph client reads object foo from pool bar , the client via librados will automatically retrieve it from OSD 10 as the new primary OSD dynamically. The Ceph client via librados connects directly to the primary OSD within an acting set when writing and reading objects. Since I/O operations do not use a centralized broker, network oversubscription is typically NOT an issue with Ceph. The following diagram depicts how CRUSH assigns objects to PGs, and PGs to OSDs. The CRUSH algorithm assigns the PGs to OSDs such that each OSD in the acting set is in a separate failure domain, which typically means the OSDs will always be on separate server hosts and sometimes in separate racks. 2.4. Ceph CRUSH ruleset Ceph assigns a CRUSH ruleset to a pool. When a Ceph client stores or retrieves data in a pool, Ceph identifies the CRUSH ruleset, a rule within the rule set, and the top-level bucket in the rule for storing and retrieving data. As Ceph processes the CRUSH rule, it identifies the primary OSD that contains the placement group for an object. That enables the client to connect directly to the OSD, access the placement group and read or write object data. To map placement groups to OSDs, a CRUSH map defines a hierarchical list of bucket types. The list of bucket types is located under types in the generated CRUSH map. The purpose of creating a bucket hierarchy is to segregate the leaf nodes by their failure domains and/or performance domains, such as drive type, hosts, chassis, racks, power distribution units, pods, rows, rooms, and data centers. With the exception of the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary. Administrators may define it according to their own needs if the default types don't suit their requirements. CRUSH supports a directed acyclic graph that models the Ceph OSD nodes, typically in a hierarchy. So Ceph administrators can support multiple hierarchies with multiple root nodes in a single CRUSH map. For example, an administrator can create a hierarchy representing higher cost SSDs for high performance, and a separate hierarchy of lower cost hard drives with SSD journals for moderate performance. 2.5.
Ceph input/output operations Ceph clients retrieve a 'Cluster Map' from a Ceph monitor, bind to a pool, and perform input/output (I/O) on objects within placement groups in the pool. The pool's CRUSH ruleset and the number of placement groups are the main factors that determine how Ceph will place the data. With the latest version of the cluster map, the client knows about all of the monitors and OSDs in the cluster and their current state. However, the client doesn't know anything about object locations. The only inputs required by the client are the object ID and the pool name. It is simple: Ceph stores data in named pools. When a client wants to store a named object in a pool it takes the object name, a hash code, the number of PGs in the pool and the pool name as inputs; then, CRUSH (Controlled Replication Under Scalable Hashing) calculates the ID of the placement group and the primary OSD for the placement group. Ceph clients use the following steps to compute PG IDs. The client inputs the pool ID and the object ID. For example, pool = liverpool and object-id = john . CRUSH takes the object ID and hashes it. CRUSH calculates the hash modulo of the number of PGs to get a PG ID. For example, 58 . CRUSH calculates the primary OSD corresponding to the PG ID. The client gets the pool ID given the pool name. For example, the pool liverpool is pool number 4 . The client prepends the pool ID to the PG ID. For example, 4.58 . The client performs an object operation such as write, read, or delete by communicating directly with the Primary OSD in the Acting Set. The topology and state of the Ceph storage cluster are relatively stable during a session. Empowering a Ceph client via librados to compute object locations is much faster than requiring the client to make a query to the storage cluster over a chatty session for each read/write operation. The CRUSH algorithm allows a client to compute where objects should be stored, and enables the client to contact the primary OSD in the acting set directly to store or retrieve data in the objects. Since a cluster at the exabyte scale has thousands of OSDs, network oversubscription between a client and a Ceph OSD is not a significant problem. If the cluster state changes, the client can simply request an update to the cluster map from the Ceph monitor. 2.6. Ceph replication Like Ceph clients, Ceph OSDs can contact Ceph monitors to retrieve the latest copy of the cluster map. Ceph OSDs also use the CRUSH algorithm, but they use it to compute where to store replicas of objects. In a typical write scenario, a Ceph client uses the CRUSH algorithm to compute the placement group ID and the primary OSD in the Acting Set for an object. When the client writes the object to the primary OSD, the primary OSD finds the number of replicas that it should store. The value is found in the osd_pool_default_size setting. Then, the primary OSD takes the object ID, pool name and the cluster map and uses the CRUSH algorithm to calculate the IDs of secondary OSDs for the acting set. The primary OSD writes the object to the secondary OSDs. When the primary OSD receives an acknowledgment from the secondary OSDs and the primary OSD itself completes its write operation, it acknowledges a successful write operation to the Ceph client. With the ability to perform data replication on behalf of Ceph clients, Ceph OSD Daemons relieve Ceph clients from that duty, while ensuring high data availability and data safety. 
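As a rough, non-authoritative illustration of the mapping described above, the ceph osd map command asks the cluster which placement group and which up and acting sets CRUSH computes for a named object; the pool liverpool and object john reuse the hypothetical example from this section, and the exact output format can vary between releases:
ceph osd map liverpool john
# prints the pool ID, the placement group ID, and the up and acting sets of OSDs for that object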
Note The primary OSD and the secondary OSDs are typically configured to be in separate failure domains. CRUSH computes the IDs of the secondary OSDs with consideration for the failure domains. Data copies In a replicated storage pool, Ceph needs multiple copies of an object to operate in a degraded state. Ideally, a Ceph storage cluster enables a client to read and write data even if one of the OSDs in an acting set fails. For this reason, Ceph defaults to making three copies of an object with a minimum of two copies clean for write operations. Ceph will still preserve data even if two OSDs fail. However, it will interrupt write operations. In an erasure-coded pool, Ceph needs to store chunks of an object across multiple OSDs so that it can operate in a degraded state. Similar to replicated pools, ideally an erasure-coded pool enables a Ceph client to read and write in a degraded state. Important Red Hat supports the following jerasure coding values for k and m : k=8 m=3 k=8 m=4 k=4 m=2 2.7. Ceph erasure coding Ceph can load one of many erasure code algorithms. The earliest and most commonly used is the Reed-Solomon algorithm. An erasure code is actually a forward error correction (FEC) code. FEC code transforms a message of K chunks into a longer message called a 'code word' of N chunks, such that Ceph can recover the original message from a subset of the N chunks. More specifically, N = K+M where the variable K is the original amount of data chunks. The variable M stands for the extra or redundant chunks that the erasure code algorithm adds to provide protection from failures. The variable N is the total number of chunks created after the erasure coding process. The value of M is simply N-K which means that the algorithm computes N-K redundant chunks from K original data chunks. This approach guarantees that Ceph can access all the original data. The system is resilient to arbitrary N-K failures. For instance, in a 10 K of 16 N configuration, or erasure coding 10/16 , the erasure code algorithm adds six extra chunks to the 10 base chunks K . For example, in an M = N-K or 16-10 = 6 configuration, Ceph will spread the 16 chunks N across 16 OSDs. The original file could be reconstructed from the 10 verified chunks even if 6 OSDs fail- ensuring that the Red Hat Ceph Storage cluster will not lose data, and thereby ensures a very high level of fault tolerance. Like replicated pools, in an erasure-coded pool the primary OSD in the up set receives all write operations. In replicated pools, Ceph makes a deep copy of each object in the placement group on the secondary OSDs in the set. For erasure coding, the process is a bit different. An erasure coded pool stores each object as K+M chunks. It is divided into K data chunks and M coding chunks. The pool is configured to have a size of K+M so that Ceph stores each chunk in an OSD in the acting set. Ceph stores the rank of the chunk as an attribute of the object. The primary OSD is responsible for encoding the payload into K+M chunks and sending them to the other OSDs. The primary OSD is also responsible for maintaining an authoritative version of the placement group logs. For example, in a typical configuration a system administrator creates an erasure coded pool to use six OSDs and sustain the loss of two of them. That is, ( K+M = 6 ) such that ( M = 2 ).
When Ceph writes the object NYAN containing ABCDEFGHIJKL to the pool, the erasure encoding algorithm splits the content into four data chunks by simply dividing the content into four parts: ABC , DEF , GHI , and JKL . The algorithm will pad the content if the content length is not a multiple of K . The function also creates two coding chunks: the fifth with YXY and the sixth with QGC . Ceph stores each chunk on an OSD in the acting set, where it stores the chunks in objects that have the same name, NYAN , but reside on different OSDs. The algorithm must preserve the order in which it created the chunks as an attribute of the object shard_t , in addition to its name. For example, Chunk 1 contains ABC and Ceph stores it on OSD5 while chunk 5 contains YXY and Ceph stores it on OSD4 . In a recovery scenario, the client attempts to read the object NYAN from the erasure-coded pool by reading chunks 1 through 6. The OSD informs the algorithm that chunks 2 and 6 are missing. These missing chunks are called 'erasures'. For example, the primary OSD could not read chunk 6 because the OSD6 is out, and could not read chunk 2, because OSD2 was the slowest and its chunk was not taken into account. However, as soon as the algorithm has four chunks, it reads the four chunks: chunk 1 containing ABC , chunk 3 containing GHI , chunk 4 containing JKL , and chunk 5 containing YXY . Then, it rebuilds the original content of the object ABCDEFGHIJKL , and original content of chunk 6, which contained QGC . Splitting data into chunks is independent from object placement. The CRUSH ruleset along with the erasure-coded pool profile determines the placement of chunks on the OSDs. For instance, using the Locally Repairable Code ( lrc ) plugin in the erasure code profile creates additional chunks and requires fewer OSDs to recover from. For example, in an lrc profile configuration K=4 M=2 L=3 , the algorithm creates six chunks ( K+M ), just as the jerasure plugin would, but the locality value ( L=3 ) requires that the algorithm create 2 more chunks locally. The algorithm creates the additional chunks as such, (K+M)/L . If the OSD containing chunk 0 fails, this chunk can be recovered by using chunks 1, 2 and the first local chunk. In this case, the algorithm only requires 3 chunks for recovery instead of 5. Note Using erasure-coded pools disables Object Map. Additional Resources For more information about CRUSH, the erasure-coding profiles, and plugins, see the Storage Strategies Guide for Red Hat Ceph Storage 6. For more details on Object Map, see the Ceph client object map section. 2.8. Ceph ObjectStore ObjectStore provides a low-level interface to an OSD's raw block device. When a client reads or writes data, it interacts with the ObjectStore interface. Ceph write operations are essentially ACID transactions: that is, they provide Atomicity , Consistency , Isolation and Durability . ObjectStore ensures that a Transaction is all-or-nothing to provide Atomicity . The ObjectStore also handles object semantics. An object stored in the storage cluster has a unique identifier, object data and metadata. So ObjectStore provides Consistency by ensuring that Ceph object semantics are correct. ObjectStore also provides the Isolation portion of an ACID transaction by invoking a Sequencer on write operations to ensure that Ceph write operations occur sequentially. In contrast, an OSDs replication or erasure coding functionality provides the Durability component of the ACID transaction. 
Since ObjectStore is a low-level interface to storage media, it also provides performance statistics. Ceph implements several concrete methods for storing data: BlueStore: A production grade implementation using a raw block device to store object data. Memstore: A developer implementation for testing read/write operations directly in RAM. K/V Store: An internal implementation for Ceph's use of key/value databases. Since administrators will generally only address BlueStore , the following sections will only describe those implementations in greater detail. 2.9. Ceph BlueStore BlueStore is the current storage implementation for Ceph. It uses the very light weight BlueFS file system on a small partition for its k/v databases and eliminates the paradigm of a directory representing a placement group, a file representing an object and file XATTRs representing metadata. BlueStore stores data as: Object Data: In BlueStore , Ceph stores objects as blocks directly on a raw block device. The portion of the raw block device that stores object data does NOT contain a filesystem. The omission of the filesystem eliminates a layer of indirection and thereby improves performance. However, much of the BlueStore performance improvement comes from the block database and write-ahead log. Block Database: In BlueStore , the block database handles the object semantics to guarantee Consistency . An object's unique identifier is a key in the block database. The values in the block database consist of a series of block addresses that refer to the stored object data, the object's placement group, and object metadata. The block database might reside on a BlueFS partition on the same raw block device that stores the object data, or it may reside on a separate block device, usually when the primary block device is a hard disk drive and an SSD or NVMe will improve performance. The key/value semantics of BlueStore do not suffer from the limitations of filesystem XATTRs. BlueStore might assign objects to other placement groups quickly within the block database without the overhead of moving files from one directory to another. The block database can store the checksum of the stored object data and its metadata, allowing full data checksum operations for each read, which is more efficient than periodic scrubbing to detect bit rot. BlueStore can compress an object and the block database can store the algorithm used to compress an object- ensuring that read operations select the appropriate algorithm for decompression. Write-ahead Log: In BlueStore , the write-ahead log ensures Atomicity and it logs all aspects of each transaction. The BlueStore write-ahead log or WAL can perform this function simultaneously. BlueStore can deploy the WAL on the same device for storing object data, or it may deploy the WAL on another device, usually when the primary block device is a hard disk drive and an SSD or NVMe will improve performance. Note It is only helpful to store a block database or a write-ahead log on a separate block device if the separate device is faster than the primary storage device. For example, SSD and NVMe devices are generally faster than HDDs. Placing the block database and the WAL on separate devices may also have performance benefits due to differences in their workloads. 2.10. Ceph self management operations Ceph clusters perform a lot of self monitoring and management operations automatically. For example, Ceph OSDs can check the cluster health and report back to the Ceph monitors. 
By using CRUSH to assign objects to placement groups and placement groups to a set of OSDs, Ceph OSDs can use the CRUSH algorithm to rebalance the cluster or recover from OSD failures dynamically. 2.11. Ceph heartbeat Ceph OSDs join a cluster and report to Ceph Monitors on their status. At the lowest level, the Ceph OSD status is up or down reflecting whether or not it is running and able to service Ceph client requests. If a Ceph OSD is down and in the Ceph storage cluster, this status may indicate the failure of the Ceph OSD. If a Ceph OSD is not running for example, it crashes- the Ceph OSD cannot notify the Ceph Monitor that it is down . The Ceph Monitor can ping a Ceph OSD daemon periodically to ensure that it is running. However, heartbeating also empowers Ceph OSDs to determine if a neighboring OSD is down , to update the cluster map and to report it to the Ceph Monitors. This means that Ceph Monitors can remain light weight processes. 2.12. Ceph peering Ceph stores copies of placement groups on multiple OSDs. Each copy of a placement group has a status. These OSDs "peer" check each other to ensure that they agree on the status of each copy of the PG. Peering issues usually resolve themselves. Note When Ceph monitors agree on the state of the OSDs storing a placement group, that does not mean that the placement group has the latest contents. When Ceph stores a placement group in an acting set of OSDs, refer to them as Primary , Secondary , and so forth. By convention, the Primary is the first OSD in the Acting Set . The Primary that stores the first copy of a placement group is responsible for coordinating the peering process for that placement group. The Primary is the ONLY OSD that will accept client-initiated writes to objects for a given placement group where it acts as the Primary . An Acting Set is a series of OSDs that are responsible for storing a placement group. An Acting Set may refer to the Ceph OSD Daemons that are currently responsible for the placement group, or the Ceph OSD Daemons that were responsible for a particular placement group as of some epoch. The Ceph OSD daemons that are part of an Acting Set may not always be up . When an OSD in the Acting Set is up , it is part of the Up Set . The Up Set is an important distinction, because Ceph can remap PGs to other Ceph OSDs when an OSD fails. Note In an Acting Set for a PG containing osd.25 , osd.32 and osd.61 , the first OSD, osd.25 , is the Primary . If that OSD fails, the Secondary , osd.32 , becomes the Primary , and Ceph will remove osd.25 from the Up Set . 2.13. Ceph rebalancing and recovery When an administrator adds a Ceph OSD to a Ceph storage cluster, Ceph updates the cluster map. This change to the cluster map also changes object placement, because the modified cluster map changes an input for the CRUSH calculations. CRUSH places data evenly, but pseudo randomly. So only a small amount of data moves when an administrator adds a new OSD. The amount of data is usually the number of new OSDs divided by the total amount of data in the cluster. For example, in a cluster with 50 OSDs, 1/50th or 2% of the data might move when adding an OSD. The following diagram depicts the rebalancing process where some, but not all of the PGs migrate from existing OSDs, OSD 1 and 2 in the diagram, to the new OSD, OSD 3, in the diagram. Even when rebalancing, CRUSH is stable. 
Many of the placement groups remain in their original configuration, and each OSD gets some added capacity, so there are no load spikes on the new OSD after the cluster rebalances. 2.14. Ceph data integrity As part of maintaining data integrity, Ceph provides numerous mechanisms to guard against bad disk sectors and bit rot. Scrubbing: Ceph OSD Daemons can scrub objects within placement groups. That is, Ceph OSD Daemons can compare object metadata in one placement group with its replicas in placement groups stored on other OSDs. Scrubbing- usually performed daily- catches bugs or storage errors. Ceph OSD Daemons also perform deeper scrubbing by comparing data in objects bit-for-bit. Deep scrubbing- usually performed weekly- finds bad sectors on a drive that weren't apparent in a light scrub. CRC Checks: In Red Hat Ceph Storage 6 when using BlueStore , Ceph can ensure data integrity by conducting a cyclical redundancy check (CRC) on write operations; then, store the CRC value in the block database. On read operations, Ceph can retrieve the CRC value from the block database and compare it with the generated CRC of the retrieved data to ensure data integrity instantly. 2.15. Ceph high availability In addition to the high scalability enabled by the CRUSH algorithm, Ceph must also maintain high availability. This means that Ceph clients must be able to read and write data even when the cluster is in a degraded state, or when a monitor fails. 2.16. Clustering the Ceph Monitor Before Ceph clients can read or write data, they must contact a Ceph Monitor to obtain the most recent copy of the cluster map. A Red Hat Ceph Storage cluster can operate with a single monitor; however, this introduces a single point of failure. That is, if the monitor goes down, Ceph clients cannot read or write data. For added reliability and fault tolerance, Ceph supports a cluster of monitors. In a cluster of Ceph Monitors, latency and other faults can cause one or more monitors to fall behind the current state of the cluster. For this reason, Ceph must have agreement among various monitor instances regarding the state of the storage cluster. Ceph always uses a majority of monitors and the Paxos algorithm to establish a consensus among the monitors about the current state of the storage cluster. Ceph Monitors nodes require NTP to prevent clock drift. Storage administrators usually deploy Ceph with an odd number of monitors so determining a majority is efficient. For example, a majority may be 1, 2:3, 3:5, 4:6, and so forth.
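To see the monitor quorum described here from the command line, the following commands are a reasonable sketch, assuming administrative access to the storage cluster:
ceph mon stat
# summarizes the monitors and the current quorum
ceph quorum_status --format json-pretty
# prints the full quorum status, including the elected leader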
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/architecture_guide/the-core-ceph-components
Chapter 77. JaegerTracing schema reference
Chapter 77. JaegerTracing schema reference The type JaegerTracing has been deprecated. Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the JaegerTracing type from OpenTelemetryTracing . It must have the value jaeger for the type JaegerTracing . Property Description type Must be jaeger . string
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JaegerTracing-reference
Chapter 8. Querying metrics
Chapter 8. Querying metrics You can query metrics to view data about how cluster components and your own workloads are performing. 8.1. About querying metrics The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator , you can query metrics for all core OpenShift Container Platform and user-defined projects. As a developer , you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. 8.1.1. Querying metrics for all projects as a cluster administrator As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Metrics . To add one or more queries, perform any of the following actions: Option Description Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions are displayed in a list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Click Add query . Duplicate an existing query. Click the Options menu to the query and select Duplicate query . Delete a query. Click the Options menu to the query and select Delete query . Disable a query from being run. Click the Options menu to the query and select Disable query . To run queries that you created, click Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, click Hide graph and calibrate your query by using the metrics table. After finding a feasible query, enable the plot to draw the graphs. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. Additional resources For more information about creating PromQL queries, see the Prometheus query documentation . 8.1.2. Querying metrics for user-defined projects as a developer You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time in the Observe --> Metrics page in the web console for your user-defined project. 
Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure Select the Developer perspective in the OpenShift Container Platform web console. Select Observe Metrics . Select the project that you want to view metrics for in the Project: list. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL . Optional: Select Custom query from the Select query list to enter a new query. As you type, autocomplete suggestions appear in a drop-down list. These suggestions include functions and metrics. Click a suggested item to select it. Note In the Developer perspective, you can only run one query at a time. Additional resources For more information about creating PromQL queries, see the Prometheus query documentation . 8.1.3. Exploring the visualized metrics After running the queries, the metrics are displayed on an interactive plot. The X-axis in the plot represents time and the Y-axis represents metrics values. Each metric is shown as a colored line on the graph. You can manipulate the plot interactively and explore the metrics. Procedure In the Administrator perspective: Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown. Note By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query. To hide all metrics from a query, click for the query and click Hide all series . To hide a specific metric, go to the query table and click the colored square near the metric name. To zoom into the plot and change the time range, do one of the following: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. To reset the time range, select Reset zoom . To display outputs for all queries at a specific point in time, hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. To hide the plot, select Hide graph . In the Developer perspective: To zoom into the plot and change the time range, do one of the following: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. To reset the time range, select Reset zoom . To display outputs for all queries at a specific point in time, hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. Additional resources See Querying metrics for details on using the PromQL interface See Querying metrics for all projects as an administrator for details on accessing metrics for all projects as an administrator. See Querying metrics for user-defined projects as a developer for details on accessing non-cluster metrics as a developer or a privileged user.
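The same PromQL queries can also be run outside the web console against the monitoring API. The following is a hedged sketch rather than a documented procedure: it assumes the thanos-querier route exists in the openshift-monitoring namespace, that your token has permission to read metrics for the target project, and that the namespace my-project is only an example:
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{namespace="my-project"}[5m]))'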
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/querying-metrics
Chapter 2. Active-Active Disaster Recovery
Chapter 2. Active-Active Disaster Recovery 2.1. Active-Active Overview The active-active disaster recovery failover configuration can span two sites. Both sites are active, and if the primary site becomes unavailable, the Red Hat Virtualization environment continues to operate in the secondary site to ensure business continuity. The active-active failover configuration includes a stretch cluster in which hosts capable of running the virtual machines are located in both the primary and secondary sites. All the hosts belong to the same Red Hat Virtualization cluster. This configuration requires replicated storage that is writeable on both sites so virtual machines can migrate between the two sites and continue running on both sites' storage. Figure 2.1. Stretch Cluster Configuration Virtual machines migrate to the secondary site if the primary site becomes unavailable. The virtual machines automatically failback to the primary site when the site becomes available and the storage is replicated in both sites. Figure 2.2. Failed Over Stretch Cluster Important To ensure virtual machine failover and failback works: Virtual machines must be configured to be highly available, and each virtual machine must have a lease on a target storage domain to ensure the virtual machine can start even without power management. Soft enforced virtual machine to host affinity must be configured to ensure the virtual machines only start on the selected hosts. For more information see Improving Uptime with Virtual Machine High Availability and Affinity Groups in the Virtual Machine Management Guide . The stretched cluster configuration can be implemented using a self-hosted engine environment, or a standalone Manager environment. For more information about the different types of deployments see Red Hat Virtualization Architecture in the Product Guide . 2.2. Network Considerations All hosts in the cluster must be on the same broadcast domain over an L2 network. So connectivity between the two sites must be L2. The maximum latency requirements between the sites across the L2 network differ for the two setups. The standalone Manager environment requires a maximum latency of 100ms, while the self-hosted engine environment requires a maximum latency of 7ms. 2.3. Storage Considerations The storage domain for Red Hat Virtualization can comprise either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems). For more information about Red Hat Virtualization storage, see Storage in the Administration Guide . Note GlusterFS Storage is deprecated, and will no longer be supported in future releases. The sites require synchronously replicated storage that is writeable on both sites with shared layer 2 (L2) network connectivity. The replicated storage is required to allow virtual machines to migrate between sites and continue running on the site's storage. All storage replication options supported by Red Hat Enterprise Linux 7 and later can be used in the stretch cluster. Important If you have a custom multipath configuration that is recommended by the storage vendor, see the instructions and important limitations in Customizing Multipath Configurations for SAN Vendors . Set the SPM role on a host at the primary site to have precedence. To do so, configure SPM priority as high in the primary site hosts and SPM priority as low on secondary site hosts. 
If you have a primary site failure that impacts network devices inside the primary site, preventing the fencing device for the SPM host from being reachable, such as power loss, the hosts in the secondary site are not able to take over the SPM role. In such a scenario virtual machines do a failover, but operations that require the SPM role in place cannot be executed, including adding new disks, extending existing disks, and exporting virtual machines. To restore full functionality, detect the actual nature of the disaster and after fixing the root cause and rebooting the SPM host, select Confirm 'Host has been Rebooted' for the SPM host. Additional resources Manually Fencing or Isolating a Non-Responsive Host in the Administration Guide . 2.4. Configuring a Self-hosted Engine Stretch Cluster Environment This procedure provides instructions to configure a stretch cluster using a self-hosted engine deployment. Prerequisites A writable storage server in both sites with L2 network connectivity. Real-time storage replication service to duplicate the storage. Limitations Maximum 7ms latency between sites. Configuring the Self-hosted Engine Stretch Cluster Deploy the self-hosted engine. See Installing Red Hat Virtualization as a self-hosted engine using the command line . Install additional self-hosted engine nodes in each site and add them to your cluster. See Adding Self-hosted Engine Nodes to the Red Hat Virtualization Manager in Installing Red Hat Virtualization as a self-hosted engine using the command line . Optionally, install additional standard hosts. See Adding Standard Hosts to the Red Hat Virtualization Manager in Installing Red Hat Virtualization as a self-hosted engine using the command line . Configure the SPM priority to be higher on all hosts in the primary site to ensure SPM failover to the secondary site occurs only when all hosts in the primary site are unavailable. See SPM Priority in the Administration Guide . Configure all virtual machines that must failover as highly available, and ensure that the virtual machine has a lease on the target storage domain. See Configuring a Highly Available Virtual Machine in the Virtual Machine Management Guide . Configure virtual machine to host soft affinity and define the behavior you expect from the affinity group. See Affinity Groups in the Virtual Machine Management Guide and Scheduling Policies in the Administration Guide . The active-active failover can be manually performed by placing the main site's hosts into maintenance mode. 2.5. Configuring a Standalone Manager Stretch Cluster Environment This procedure provides instructions to configure a stretch cluster using a standalone Manager deployment. Prerequisites A writable storage server in both sites with L2 network connectivity. Real-time storage replication service to duplicate the storage. Limitations Maximum 100ms latency between sites. Important The Manager must be highly available for virtual machines to failover and failback between sites. If the Manager goes down with the site, the virtual machines will not failover. The standalone Manager is only highly available when managed externally. For example: Using Red Hat's High Availability Add-On. As a highly available virtual machine in a separate virtualization environment. Using Red Hat Enterprise Linux Cluster Suite. In a public cloud. Procedure Install and configure the Red Hat Virtualization Manager. See Installing Red Hat Virtualization as a standalone Manager with local databases .
Install the hosts in each site and add them to the cluster. See Installing Hosts for Red Hat Virtualization in Installing Red Hat Virtualization as a standalone Manager with local databases . Configure the SPM priority to be higher on all hosts in the primary site to ensure SPM failover to the secondary site occurs only when all hosts in the primary site are unavailable. See SPM Priority in the Administration Guide . Configure all virtual machines that must failover as highly available, and ensure that the virtual machine has a lease on the target storage domain. See Configuring a Highly Available Virtual Machine in the Virtual Machine Management Guide . Configure virtual machine to host soft affinity and define the behavior you expect from the affinity group. See Affinity Groups in the Virtual Machine Management Guide and Scheduling Policies in the Administration Guide . The active-active failover can be manually performed by placing the main site's hosts into maintenance mode.
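The manual failover is typically initiated from the Administration Portal, but it can also be scripted against the Manager REST API. The following is a minimal sketch only, assuming a Manager at manager.example.com, the admin@internal account, and a placeholder host ID; repeat the call for each host in the primary site and confirm the result in the Administration Portal.
# Hypothetical example: place one primary-site host into maintenance mode
# through the Manager REST API to drive a manual active-active failover.
# The Manager FQDN, credentials, and host ID are placeholders.
curl --insecure --request POST \
     --user "admin@internal:password" \
     --header "Content-Type: application/xml" \
     --header "Accept: application/xml" \
     --data "<action/>" \
     "https://manager.example.com/ovirt-engine/api/hosts/<primary_host_id>/deactivate"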
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/disaster_recovery_guide/active_active
Chapter 7. Running a Red Hat build of Kogito microservice
Chapter 7. Running a Red Hat build of Kogito microservice After you design the business decisions for your Red Hat build of Kogito microservice, you can run your Red Hat build of Quarkus or Spring Boot application in one of the following modes: Development mode : For local testing. On Red Hat build of Quarkus, development mode also offers live reload of your decisions in your running applications for advanced debugging. JVM mode : For compatibility with a Java virtual machine (JVM). Procedure In a command terminal, navigate to the project that contains your Red Hat build of Kogito microservice and enter one of the following commands, depending on your preferred run mode and application environment: For development mode: On Red Hat build of Quarkus On Spring Boot For JVM mode: On Red Hat build of Quarkus and Spring Boot
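After the application starts, the decisions in your project are exposed as REST endpoints, on port 8080 by default for both Quarkus and Spring Boot. The following request is a sketch only; the /persons endpoint and the JSON payload are hypothetical and must be replaced with the endpoint generated for your own decision model.
# Hypothetical request against a running Red Hat build of Kogito microservice.
# Replace the endpoint and payload with the ones generated for your project.
curl -X POST http://localhost:8080/persons \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"person": {"name": "John Quark", "age": 20}}'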
[ "mvn clean compile quarkus:dev", "mvn clean compile spring-boot:run", "mvn clean package java -jar target/sample-kogito-1.0-SNAPSHOT-runner.jar" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/proc-kogito-microservice-running-app_getting-started-kogito-microservices
4.309. sudo
4.309. sudo 4.309.1. RHBA-2011:1175 - sudo bug fix and enhancement update An updated sudo package that fixes one bug and introduces one feature enhancement is now available for Red Hat Enterprise Linux 6. The sudo utility allows system administrators to give certain users the ability to run commands as root with logging. Bug Fix BZ# 709235 Prior to this update, sudo incorrectly searched for the Lightweight Directory Access Protocol (LDAP) configuration in the /etc/nss_ldap.conf file. This bug has been fixed in this update so that sudo now searches the /etc/nslcd.conf file. Enhancement BZ# 709859 Because the sudo utility needs to be run with elevated privileges, the sudo package is now built with RELRO linker flags. All users of sudo are advised to upgrade to this updated package, which fixes this bug and adds this enhancement. 4.309.2. RHBA-2012:0565 - sudo bug fix update Updated sudo packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The sudo (superuser do) utility allows system administrators to give certain users the ability to run commands as root. Bug Fixes BZ# 802440 A race condition in the signal handling code caused the sudo process to become unresponsive after receiving the SIGCHLD signal. This update modifies the signal handling to prevent the race condition, which ensures that the sudo process no longer hangs under these circumstances. BZ# 811879 The "-l" option is used to list allowed and forbidden commands for the invoking user or for the user specified by the "-U" option. However, previously, the getgrouplist() function incorrectly checked the invoker's group membership instead of the membership of the specified user. Consequently, using the "sudo" command with both the "-l" and "-U" options listed privileges granted to any group the invoker was a member of. The getgrouplist() function has been fixed to properly check the group membership of the intended user rather than checking the invoker's membership. This ensures that the required output is listed when using the "-l" and "-U" options. All users of sudo are advised to upgrade to these updated packages, which fix these bugs.
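The behavior corrected by BZ#811879 is easiest to see with the listing options themselves. The following commands are a generic usage sketch; the user name alice is a placeholder.
# List the sudo privileges of the invoking user
sudo -l

# List the sudo privileges granted to another user; after the fix for BZ#811879,
# the group membership of "alice" is evaluated rather than the invoker's
sudo -l -U alice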
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/sudo
1.4. Red Hat Documentation Site
1.4. Red Hat Documentation Site Red Hat's official documentation site is available at https://access.redhat.com/site/documentation/ . There you will find the latest version of every book, including this one.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/red_hat_documentation_site
Chapter 21. KafkaAuthorizationSimple schema reference
Chapter 21. KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationSimple schema properties Configures the Kafka custom resource to use simple authorization and define Access Control Lists (ACLs). ACLs allow you to define which users have access to which resources at a granular level. Streams for Apache Kafka uses Kafka's built-in authorization plugins as follows: StandardAuthorizer for Kafka in KRaft mode AclAuthorizer for ZooKeeper-based Kafka Set the type property in the authorization section to the value simple , and configure a list of super users. Super users are always allowed without querying ACL rules. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . Example simple authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. 21.1. KafkaAuthorizationSimple schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value simple for the type KafkaAuthorizationSimple . Property Property type Description type string Must be simple . superUsers string array List of super users. Should contain list of user principals which should get unlimited access rights.
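As a companion to the cluster-side configuration above, the following is a minimal sketch of a KafkaUser that defines ACL rules under simple authorization. The cluster, user, and topic names are placeholders, and the full set of available fields is described in the ACLRule schema reference.
# Hypothetical KafkaUser granting read access to a single topic under simple authorization.
# The cluster, user, and topic names are placeholders.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: user-2
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Read
          - Describe
        host: "*"
EOF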
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaAuthorizationSimple-reference
Chapter 3. Selecting the Identity Store for Authentication with authconfig
Chapter 3. Selecting the Identity Store for Authentication with authconfig The Identity & Authentication tab in the authconfig UI sets how users should be authenticated. The default is to use local system authentication, meaning the users and their passwords are checked against local system accounts. A Red Hat Enterprise Linux machine can also use external resources which contain the users and credentials, including LDAP, NIS, and Winbind. 3.1. IPAv2 There are two different ways to configure an Identity Management server as an identity back end. For IdM version 2 (Red Hat Enterprise Linux version 6.3 and earlier), version 3 (in Red Hat Enterprise Linux 6.4 and later), and version 4 (in Red Hat Enterprise Linux 7.1 and later), these are configured as IPAv2 providers in authconfig . For other IdM versions and for community FreeIPA servers, these are configured as LDAP providers. 3.1.1. Configuring IdM from the UI Open the authconfig UI. Select IPAv2 in the User Account Database drop-down menu. Figure 3.1. Authentication Configuration Set the information that is required to connect to the IdM server. IPA Domain gives the DNS domain of the IdM domain. IPA Realm gives the Kerberos domain of the IdM domain. IPA Server gives the host name of any IdM server within the IdM domain topology. Do not configure NTP optionally disables NTP services when the client is configured. This is usually not recommended, because the IdM server and all clients need to have synchronized clocks for Kerberos authentication and certificates to work properly. This could be disabled if the IdM servers are using a different NTP server rather than hosting it within the domain. Click the Join the domain button. This runs the ipa-client-install command and, if necessary, installs the ipa-client packages. The installation script automatically configures all system files that are required for the local system and contacts the domain servers to update the domain information. 3.1.2. Configuring IdM from the Command Line An IdM domain centralizes several common and critical services in a single hierarchy, most notably DNS and Kerberos. authconfig (much like realmd in Chapter 8, Using realmd to Connect to an Identity Domain ) can be used to enroll a system in the IdM domain. That runs the ipa-client-install command and, if necessary, installs the ipa-client packages. The installation script automatically configures all system files that are required for the local system and contacts the domain servers to update the domain information. Joining a domain requires three pieces of information to identify the domain: the DNS domain name ( --ipav2domain ), the Kerberos realm name ( --ipav2realm ), and the IdM server to contact ( --ipav2server ). The --ipav2join option gives the administrator user name to use to connect to the IdM server; this is typically admin . If the IdM domain is not running its own NTP services, then it is possible to use the --disableipav2nontp option to prevent the setup script from using the IdM server as the NTP server. This is generally not recommended, because the IdM server and all clients need to have synchronized clocks for Kerberos authentication and certificates to work properly.
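For comparison, the same enrollment can be performed by running ipa-client-install directly instead of going through authconfig. The command below is a sketch only; the domain, realm, server, and principal values are placeholders, and the available options can vary between versions.
# Hypothetical direct enrollment with ipa-client-install
# (domain, realm, server, and principal are placeholder values;
#  the command prompts for the admin password)
ipa-client-install --domain=ipaexample.com --realm=IPAEXAMPLE \
    --server=server.ipaexample.com --principal=admin --mkhomedir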
[ "authconfig --enableipav2 --ipav2domain=IPAEXAMPLE --ipav2realm=IPAEXAMPLE --ipav2server=ipaexample.com --ipav2join=admin" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/setting-auth-stores
Creating skills and knowledge YAML files
Creating skills and knowledge YAML files Red Hat Enterprise Linux AI 1.4 Guidelines on creating skills and knowledge YAML files Red Hat RHEL AI Documentation Team
[ "seed_examples: - context: questions_and_answers:", "questions_and_answers: - question: answer:", "document: repo: commit: patterns:", "taxonomy/knowledge/technical_documents/product_customer_cases/qna.yaml", "ilab taxonomy diff", "knowledge/technical_documents/product_customer_cases/qna.yaml Taxonomy in /taxonomy/ is valid :)", "9:15 error syntax error: mapping values are not allowed here (syntax) Reading taxonomy failed with the following error: 1 taxonomy with errors! Exiting.", "version: 3 1 domain: astronomy 2 document_outline: | 3 Information about the Phoenix Constellation including the history, characteristics, and features of the stars in the constellation. created_by: <user-name> 4 seed_examples: - context: | 5 **Phoenix** is a minor constellation in the southern sky. Named after the mythical Phoenix_(mythology), it was first depicted on a celestial atlas by Johann Bayerin his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. The constellation stretches from roughly −39 degrees to −57 degrees declination, and from 23.5h to 2.5h of right ascension. The constellations Phoenix, Grus, Pavo and Tucana are known as the Southern Birds. questions_and_answers: - question: | 6 What is the Phoenix constellation? answer: | 7 The Phoenix constellation is a minor constellation in the southern sky. - question: | Who charted the Phoenix constellation? answer: | The Phoenix constellation was charted by french explorer and astronomer Nicolas Louis de Lacaille. - question: | How far does the Phoenix constellation stretch? answer: | The phoenix constellation stretches from roughly −39deg to −57deg declination, and from 23.5h to 2.5h of right ascension. - context: | Phoenix was the largest of the 12 constellations established by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer *Uranometria* of 1603. De Houtman included it in his southern star catalog the same year under the Dutch name *Den voghel Fenicx*, \"The Bird Phoenix\", symbolising the phoenix of classical mythology. One name of the brightest star Alpha Phoenicis-Ankaa-is derived from the Arabic: ?l`nq??, romanized: al-'anqa', lit. 'the phoenix', and was coined sometime after 1800 in relation to the constellation. questions_and_answers: - question: | What is the brightest star in the Phoenix constellation called? answer: | Alpha Phoenicis or Ankaa is the brightest star in the Phoenix Constellation. - question: Where did the Phoenix constellation first appear? answer: | The Phoenix constellation first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius. - question: | What does \"The Bird Phoenix\" symbolize? answer: | \"The Bird Phoenix\" symbolizes the phoenix of classical mythology. - context: | Phoenix is a small constellation bordered by Fornax and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of Hydrus to the south, and Eridanus to the east and southeast. The bright star Achernar is nearby. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is \"Phe\". 
The official constellation boundaries, as set by Belgian astronomer Eugene Delporte in 1930, are defined by a polygon of 10 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 23<sup>h</sup> 26.5<sup>m</sup> and 02<sup>h</sup> 25.0<sup>m</sup>, while the declination coordinates are between −39.31deg and −57.84deg. This means it remains below the horizon to anyone living north of the 40th parallel in the Northern Hemisphere, and remains low in the sky for anyone living north of the equator. It is most visible from locations such as Australia and South Africa during late Southern Hemisphere spring. Most of the constellation lies within, and can be located by, forming a triangle of the bright stars Achernar, Fomalhaut and Beta Ceti-Ankaa lies roughly in the centre of this. questions_and_answers: - question: What are the characteristics of the Phoenix constellation? answer: | Phoenix is a small constellation bordered by Fornax and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of Hydrus to the south, and Eridanus to the east and southeast. The bright star Achernar is nearby. - question: | When is the phoenix constellation most visible? answer: | Phoenix is most visible from locations such as Australia and South Africa during late Southern Hemisphere spring. - question: | What are the Phoenix Constellation boundaries? answer: | The official constellation boundaries for Phoenix, as set by Belgian astronomer Eugene Delporte in 1930, are defined by a polygon of 10 segments. - context: | Ten stars have been found to have planets to date, and four planetary systems have been discovered with the SuperWASP project. HD 142 is a yellow giant that has an apparent magnitude of 5.7, and has a planet (HD 142b) 1.36 times the mass of Jupiter which orbits every 328 days. HD 2039 is a yellow subgiant with an apparent magnitude of 9.0 around 330 light years away which has a planet (HD 2039) six times the mass of Jupiter. WASP-18 is a star of magnitude 9.29 which was discovered to have a hot Jupiter-like planet (WASP-18b) taking less than a day to orbit the star. The planet is suspected to be causing WASP-18 to appear older than it really is. WASP-4and WASP-5 are solar-type yellow stars around 1000 light years distant and of 13th magnitude, each with a single planet larger than Jupiter. WASP-29 is an orange dwarf of spectral type K4V and visual magnitude 11.3, which has a planetary companion of similar size and mass to Saturn. The planet completes an orbit every 3.9 days. questions_and_answers: - question: In the Phoenix constellation, how many stars have planets? answer: | In the Phoenix constellation, ten stars have been found to have planets to date, and four planetary systems have been discovered with the SuperWASP project. - question: | What is HD 142? answer: | HD 142 is a yellow giant that has an apparent magnitude of 5.7, and has a planet (HD 142 b) 1.36 times the mass of Jupiter which orbits every 328 days. - question: | Are WASP-4 and WASP-5 solar-type yellow stars? answer: | Yes, WASP-4 and WASP-5 are solar-type yellow stars around 1000 light years distant and of 13th magnitude, each with a single planet larger than Jupiter. - context: | The constellation does not lie on the galactic plane of the Milky Way, and there are no prominent star clusters. NGC 625 is a dwarf irregular galaxy of apparent magnitude 11.0 and lying some 12.7 million light years distant. 
Only 24000 light years in diameter, it is an outlying member of the Sculptor Group. NGC 625 is thought to have been involved in a collision and is experiencing a burst of active star formation. NGC 37 is a lenticular galaxy of apparent magnitude 14.66. It is approximately 42 kiloparsecs 137,000 light-years in diameter and about 12.9 billion years old. Robert's Quartet composed of the irregular galaxy NGC 87, and three spiral galaxies NGC 88, NGC 89 and NGC 92 is a group of four galaxies located around 160 million light-years away which are in the process of colliding and merging. They are within a circle of radius of 1.6 arcmin, corresponding to about 75,000 light-years. Located in the galaxy ESO 243-49 is HLX-1, an intermediate-mass black hole-the first one of its kind identified. It is thought to be a remnant of a dwarf galaxy that was absorbed in a collision with ESO 243-49. Before its discovery, this class of black hole was only hypothesized. questions_and_answers: - question: | Is the Phoenix Constellation part of the Milky Way? answer: | The Phoenix constellation does not lie on the galactic plane of the Milky Way, and there are no prominent star clusters. - question: | How many light years away is NGC 625? answer: | NGC 625 is 24000 light years in diameter and is an outlying member of the Sculptor Group. - question: | What is Robert's Quartet composed of? answer: | Robert's Quartet is composed of the irregular galaxy NGC 87, and three spiral galaxies NGC 88, NGC 89 and NGC 92. document: repo: https://github.com/<profile>/<repo-name> / 8 commit: <commit hash> 9 patterns: - phoenix_constellation.md 10 - phoenix_history.md", "Phoenix (constellation) **Phoenix** is a minor constellation in the southern sky. Named after the mythical phoenix, it was first depicted on a celestial atlas by Johann Bayer in his 1603 *Uranometria*. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. The constellation stretches from roughly −39 degrees to −57 degrees declination, and from 23.5h to 2.5h of right ascension. The constellations Phoenix, Grus , Pavo and Tucana, are known as the Southern Birds. The brightest star, Alpha Phoenicis, is named Ankaa, an Arabic word meaning 'the Phoenix'. It is an orange giant of apparent magnitude 2.4. Next is Beta Phoenicis, actually a binary system composed of two yellow giants with a combined apparent magnitude of 3.3. Nu Phoenicis has a dust disk, while the constellation has ten star systems with known planets and the recently discovered galaxy clusters El Gordo and the Phoenix Cluster-located 7.2 and 5.7 billion light years away respectively, two of the largest objects in the visible universe. Phoenix is the radiant of two annual meteor showers: the Phoenicids in December, and the July Phoenicids. ## History Phoenix was the largest of the 12 constellations established by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's*Uranometria* of 1603. De Houtman included it in his southern star catalog the same year under the Dutch name *Den voghel Fenicx*, \"The Bird Phoenix\", symbolizing the phoenix of classical mythology. One name of the brightest star Alpha Phoenicis-Ankaa-is derived from the Arabic: ?l`nq??, romanized: al-'anqa', lit. 
'the phoenix', and was coined sometime after 1800 in relation to the constellation. Celestial historian Richard Allen noted that unlike the other constellations introduced by Plancius and La Caille, Phoenix has actual precedent in ancient astronomy, as the Arabs saw this formation as representing young ostriches, *Al Ri'al*, or as a griffin or eagle. In addition, the same group of stars was sometimes imagined by the Arabs as a boat, *Al Zaurak*, on the nearby river Eridanus. He observed, \"the introduction of a Phoenix into modern astronomy was, in a measure, by adoption rather than by invention.\" The Chinese incorporated Phoenix's brightest star, Ankaa (Alpha Phoenicis), and stars from the adjacent constellation Sculptor to depict *Bakui*, a net for catching birds. Phoenix and the neighboring constellation of Grus together were seen by Julius Schiller as portraying Aaron the High Priest. These two constellations, along with nearby Pavo and Tucana, are called the Southern Birds. ## Characteristics Phoenix is a small constellation bordered by Fornax and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of Hydrus to the south, and Eridanus to the east and southeast. The bright star Achernar is nearby. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is \"Phe\". The official constellation boundaries, as set by Belgian astronomer Eugene Delporte in 1930, are defined by a polygon of 10 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 23<sup>h</sup> 26.5<sup>m</sup> and 02<sup>h</sup> 25.0<sup>m</sup>, while the declination coordinates are between −39.31deg and −57.84deg. This means it remains below the horizon to anyone living north of the 40th parallel in the Northern Hemisphere, and remains low in the sky for anyone living north of the equator. It is most visible from locations such as Australia and South Africa during late Southern Hemisphere spring. Most of the constellation lies within, and can be located by, forming a triangle of the bright stars Achernar, Fomalhaut and Beta Ceti-Ankaa lies roughly in the centre of this.", "taxonomy/compositional_skills/grounded/<add_example>/qna.yaml", "ilab taxonomy diff", "compositional_skills/writing/freeform/<example>/qna.yaml Taxonomy in /taxonomy/ is valid :)", "6:11 error syntax error: mapping values are not allowed here (syntax) Reading taxonomy failed with the following error: 1 taxonomy with errors! Exiting.", "version: 2 1 created_by: <user-name> 2 task_description: 'Teach the model how to rhyme.' 3 seed_examples: - question: What are 5 words that rhyme with horn? 4 answer: warn, torn, born, thorn, and corn. 5 - question: What are 5 words that rhyme with cat? answer: bat, gnat, rat, vat, and mat. - question: What are 5 words that rhyme with poor? answer: door, shore, core, bore, and tore. - question: What are 5 words that rhyme with bank? answer: tank, rank, prank, sank, and drank. - question: What are 5 words that rhyme with bake? answer: wake, lake, steak, make, and quake.", "version: 2 1 created_by: <user-name> 2 task_description: This skill provides the ability to read a markdown-formatted table. 
3 seed_examples: - context: | 4 | **Breed** | **Size** | **Barking** | **Energy** | |----------------|--------------|-------------|------------| | Afghan Hound | 25-27 in | 3/5 | 4/5 | | Labrador | 22.5-24.5 in | 3/5 | 5/5 | | Cocker Spaniel | 14.5-15.5 in | 3/5 | 4/5 | | Poodle (Toy) | <= 10 in | 4/5 | 4/5 | question: | 5 Which breed has the most energy? answer: | 6 The breed with the most energy is the Labrador. - context: | | **Name** | **Date** | **Color** | **Letter** | **Number** | |----------|----------|-----------|------------|------------| | George | Mar 5 | Green | A | 1 | | Grainne | Dec 31 | Red | B | 2 | | Abigail | Jan 17 | Yellow | C | 3 | | Bhavna | Apr 29 | Purple | D | 4 | | Remy | Sep 9 | Blue | E | 5 | question: | What is Grainne's letter and what is her color? answer: | Grainne's letter is B and her color is red. - context: | | Banana | Apple | Blueberry | Strawberry | |--------|------------|-----------|------------| | Yellow | Red, Green | Blue | Red | | Large | Medium | Small | Small | | Peel | Peel | No peel | No peel | question: | Which fruit is blue, small, and has no peel? answer: | The blueberry is blue, small, and has no peel.", "- question: How many eggs are needed to make roughly 24 chocolate chip cookies? answer: You need around two eggs to make 24 chocolate chip cookies." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html-single/creating_skills_and_knowledge_yaml_files/index
Chapter 12. Upgrading
Chapter 12. Upgrading For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator's new version. 12.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators
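To check which Operator version OLM has installed, and which Collector instances the Operator manages, you can inspect the Subscription, ClusterServiceVersion, and OpenTelemetryCollector resources. The namespace used below is an assumption; substitute the namespace into which the Operator was installed.
# Show the installed Operator version and the update channel
# (the namespace is an assumption; adjust it to your installation)
oc get subscription,csv -n openshift-opentelemetry-operator

# List the OpenTelemetry Collector instances that the Operator manages and upgrades
oc get opentelemetrycollectors --all-namespaces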
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/dist-tracing-otel-updating
Chapter 6. Configuring metrics for the monitoring stack
Chapter 6. Configuring metrics for the monitoring stack As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks: Create a Prometheus ServiceMonitor CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters. Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 6.1. Configuration for sending metrics to the monitoring stack You can configure the OpenTelemetryCollector custom resource (CR) to create a Prometheus ServiceMonitor CR or a PodMonitor CR for a sidecar deployment. A ServiceMonitor can scrape Collector's internal metrics endpoint and Prometheus exporter metrics endpoints. Example of the OpenTelemetry Collector CR with the Prometheus exporter apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: ":8888" pipelines: metrics: exporters: [prometheus] 1 Configures the Red Hat build of OpenTelemetry Operator to create the Prometheus ServiceMonitor CR or PodMonitor CR to scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints. Note Setting enableMetrics to true creates the following two ServiceMonitor instances: One ServiceMonitor instance for the <instance_name>-collector-monitoring service. This ServiceMonitor instance scrapes the Collector's internal metrics. One ServiceMonitor instance for the <instance_name>-collector service. This ServiceMonitor instance scrapes the metrics exposed by the Prometheus exporter instances. Alternatively, a manually created Prometheus PodMonitor CR can provide fine control, for example removing duplicated labels added during Prometheus scraping. Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job 1 The name of the OpenTelemetry Collector CR. 2 The name of the internal metrics port for the OpenTelemetry Collector. This port name is always metrics . 3 The name of the Prometheus exporter port for the OpenTelemetry Collector. 6.2. Configuration for receiving metrics from the monitoring stack A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 
Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: "true" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__="<metric_name>"}' 4 metrics_path: '/federate' static_configs: - targets: - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] 1 Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that it can access the metrics data. 2 Injects the OpenShift service CA for configuring the TLS in the Prometheus receiver. 3 Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack. 4 Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details and limitations of the federate endpoint. 5 Configures the debug exporter to print the metrics to the standard output. 6.3. Additional resources Querying metrics by using the federation endpoint for Prometheus
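After you apply an OpenTelemetryCollector CR with enableMetrics set to true, as shown in the first example in this chapter, you can confirm that the corresponding monitoring objects exist. The namespace in the commands below is an assumption taken from the examples above; substitute the namespace of your Collector deployment.
# Verify the ServiceMonitor objects created for the Collector
# (the namespace is an assumption; use the namespace of your Collector CR)
oc get servicemonitors.monitoring.coreos.com -n observability

# For sidecar deployments, or if you created one manually, check the PodMonitor instead
oc get podmonitors.monitoring.coreos.com -n observability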
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/red_hat_build_of_opentelemetry/otel-configuring-metrics-for-monitoring-stack
Chapter 2. Container security
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.11 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 8 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules FIPS cryptography Disk encryption Chrony time service About the OpenShift Update Service 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS mode or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. Additional resources FIPS cryptography 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. 
Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 8 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
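Before and after applying hardening changes through machine configs, it can help to confirm what each node pool is currently running. The following commands are a short sketch using the standard worker pool; they only read cluster state.
# List the machine config pools and the rendered configuration each pool is running
oc get machineconfigpool

# List all machine configs, including any custom hardening configs you applied
oc get machineconfig

# Inspect the configuration sources for the worker pool
oc describe machineconfigpool/worker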
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to Nodes Installation configuration parameters - see fips Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.11.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Understanding the verification of container images lacking verifiable signatures Each OpenShift Container Platform release image is immutable and signed with a Red Hat production key. During an OpenShift Container Platform update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents. 
For example, the image references lacking a verifiable signature are contained in the signed OpenShift Container Platform release image: Example release info output USD oc adm release info quay.io/openshift-release-dev/ ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2 1 Signed release image SHA. 2 Container image lacking a verifiable signature included in the release. 2.4.3.1. Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an OpenShift Container Platform update. This is an internal process. An OpenShift Container Platform installation or update fails if the automated verification fails. Verification of signatures can also be done manually using the skopeo command-line utility. Additional resources Introduction to OpenShift Updates 2.4.3.2. Using skopeo to verify signatures of Red Hat container images You can verify the signatures for container images included in an OpenShift Container Platform release image by pulling those signatures from the OCP release mirror site . Because the signatures on the mirror site are not in a format readily understood by Podman or CRI-O, you can use the skopeo standalone-verify command to verify that your release images are signed by Red Hat. Prerequisites You have installed the skopeo command-line utility. Procedure Get the full SHA for your release by running the following command: USD oc adm release info <release_version> \ 1 1 Substitute <release_version> with your release number, for example, 4.14.3 . Example output snippet --- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 --- Pull down the Red Hat release key by running the following command: USD curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt Get the signature file for the specific release that you want to verify by running the following command: USD curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \ 1 1 Replace <sha_from_version> with the SHA value from the full link to the mirror site that matches the SHA of your release. For example, the link to the signature for the 4.12.23 release is https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55/signature-1 , and the SHA value is e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Get the manifest for the release image by running the following command: USD skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \ 1 1 Replace <quay_link_to_release> with the output of the oc adm release info command. For example, quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Use skopeo to verify the signature: USD skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key where: <release_number> Specifies the release number, for example 4.14.3 . <arch> Specifies the architecture, for example x86_64 .
Example output Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 2.4.4. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. 2.5.1. Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers, you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages such as the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI), you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL, and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. These include the libraries, utilities, and other features the application expects to see in the operating system's file system.
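For example, a Containerfile typically declares that trusted base image on its first line and layers only the software the application needs on top of it. The following is a minimal, illustrative sketch: it assumes the freely redistributable ubi8/ubi-minimal image described below, and the application binary and package selection are hypothetical.

FROM registry.access.redhat.com/ubi8/ubi-minimal
# shadow-utils provides useradd; microdnf is the minimal package manager in ubi-minimal.
RUN microdnf install -y shadow-utils && microdnf clean all
# Run the application as a dedicated non-root user.
RUN useradd --uid 1001 --create-home appuser
USER 1001
# Copy in a hypothetical application binary.
COPY ./myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]

Because every layer above the base image is one you added deliberately, auditing what the final image contains is straightforward.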
Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches, and they are free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7 and 8 ( ubi7/ubi and ubi8/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal and ubi8/ubi-minimal ). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ), and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal, and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries, such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by Clair . In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users. 2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance.
Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1. Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. 
Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.11 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. 
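One lightweight way to check where an image comes from before deploying it is to inspect its metadata remotely. The following sketch uses the skopeo command-line utility against a Red Hat UBI image; the image reference is only an example, and the exact fields in the output vary by image:

$ skopeo inspect docker://registry.access.redhat.com/ubi8/ubi:latest

The output includes the image digest, creation time, labels, and layer digests, which you can record and later compare against the images that are actually running in your cluster.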
When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. 
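As a sketch of the role-based controls the integrated registry provides, the following command grants a user permission to pull images from the image streams in a project; the user and project names are placeholders:

$ oc policy add-role-to-user system:image-puller <user_name> -n <project_name>

Granting the system:image-builder role in the same way controls who can push images into that project, so pull and push rights can be managed separately.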
Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage or Splunk to allow for later search and analysis. Clair : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The following figure illustrates the process and tools that could be incorporated into a trusted software supply chain for containerized software: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2.
Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. 
Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, ensuring the immutable containers process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images including the contained content are from trusted sources, and they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. 
Two parameters define this policy: one or more registries, with optional project namespace trust type, such as accept, reject, or require public key(s) You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file { "default": [{"type": "reject"}], "transports": { "docker": { "access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "atomic": { "172.30.1.1:5000/openshift": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "172.30.1.1:5000/production": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.com/pubkey" } ], "172.30.1.1:5000": [{"type": "reject"}] } } } The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. 
Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. 
Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admission plugins: These implement a default set of policies and resource limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admission plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or postinstallation. 2.10.3.2. API access control and management Applications can have multiple, independent API services that have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access.
3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. 
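As a minimal sketch of how that default openness can be tightened with the NetworkPolicy objects discussed next, the following policy selects every pod in its project and allows no ingress traffic, so incoming connections are denied unless another policy explicitly permits them (the policy name is arbitrary):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []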
To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. 
OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : USD oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. 
For example, the following command uses the jq tool against JSON output to extract only NodeHasDiskPressure events: USD oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: USD oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs, and deployments in real time. Different users have different levels of access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server; a sketch of how to view these logs follows the resource list below. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs
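The following sketch shows one way to list and then read the API server audit logs directly from a control plane node; the node name is a placeholder, and the exact log file names vary by node:

$ oc adm node-logs --role=master --path=kube-apiserver/
$ oc adm node-logs <node_name> --path=kube-apiserver/audit.log

Each audit entry is a JSON record that includes the user, the verb, the requested resource, and the response code, which makes the logs straightforward to filter with tools such as jq.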
[ "variant: openshift version: 4.11.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/security_and_compliance/container-security-1
Role APIs
Role APIs OpenShift Container Platform 4.18 Reference guide for role APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/role_apis/index
Chapter 4. Profile [tuned.openshift.io/v1]
Chapter 4. Profile [tuned.openshift.io/v1] Description Profile is a specification for a Profile resource. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object ProfileStatus is the status for a Profile resource; the status is for internal use only and its fields may be changed/removed in the future. 4.1.1. .spec Description Type object Required config Property Type Description config object 4.1.2. .spec.config Description Type object Required tunedProfile Property Type Description debug boolean option to debug TuneD daemon execution providerName string Name of the cloud provider as taken from the Node providerID: <ProviderName>://<ProviderSpecificNodeID> tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf tunedProfile string TuneD profile to apply 4.1.3. .spec.config.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 4.1.4. .status Description ProfileStatus is the status for a Profile resource; the status is for internal use only and its fields may be changed/removed in the future. Type object Required tunedProfile Property Type Description bootcmdline string kernel parameters calculated by tuned for the active Tuned profile; this field is OBSOLETE and will be removed, see OCPBUGS-19351 conditions array conditions represents the state of the per-node Profile application conditions[] object ProfileStatusCondition represents a partial state of the per-node Profile application. tunedProfile string the current profile in use by the Tuned daemon 4.1.5. .status.conditions Description conditions represents the state of the per-node Profile application Type array 4.1.6. .status.conditions[] Description ProfileStatusCondition represents a partial state of the per-node Profile application. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 4.2. 
API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/profiles GET : list objects of kind Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles DELETE : delete collection of Profile GET : list objects of kind Profile POST : create a Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name} DELETE : delete a Profile GET : read the specified Profile PATCH : partially update the specified Profile PUT : replace the specified Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name}/status GET : read status of the specified Profile PATCH : partially update status of the specified Profile PUT : replace status of the specified Profile 4.2.1. /apis/tuned.openshift.io/v1/profiles Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Profile Table 4.2. HTTP responses HTTP code Reponse body 200 - OK ProfileList schema 401 - Unauthorized Empty 4.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles Table 4.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Profile Table 4.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Profile Table 4.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.8. HTTP responses HTTP code Reponse body 200 - OK ProfileList schema 401 - Unauthorized Empty HTTP method POST Description create a Profile Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.10. Body parameters Parameter Type Description body Profile schema Table 4.11. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 202 - Accepted Profile schema 401 - Unauthorized Empty 4.2.3. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name} Table 4.12. Global path parameters Parameter Type Description name string name of the Profile namespace string object name and auth scope, such as for teams and projects Table 4.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Profile Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.15. Body parameters Parameter Type Description body DeleteOptions schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Profile Table 4.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.18. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Profile Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Patch schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Profile Table 4.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.23. Body parameters Parameter Type Description body Profile schema Table 4.24. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 401 - Unauthorized Empty 4.2.4. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name}/status Table 4.25. Global path parameters Parameter Type Description name string name of the Profile namespace string object name and auth scope, such as for teams and projects Table 4.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Profile Table 4.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.28. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Profile Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body Patch schema Table 4.31. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Profile Table 4.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.33. Body parameters Parameter Type Description body Profile schema Table 4.34. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 401 - Unauthorized Empty
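As a brief usage sketch appended here (not part of the original reference), the list endpoint described above can be exercised either through the OpenShift CLI or directly against the API. The namespace shown is an assumption based on where the Cluster Node Tuning Operator typically creates Profile objects; substitute the namespace used in your cluster.
$ oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator
$ oc get --raw /apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/profiles | jq '.items[].metadata.name'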
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/node_apis/profile-tuned-openshift-io-v1
probe::signal.handle
probe::signal.handle Name probe::signal.handle - Signal handler being invoked Synopsis signal.handle Values name Name of the probe point sig The signal number that invoked the signal handler sinfo The address of the siginfo table ka_addr The address of the k_sigaction table associated with the signal sig_mode Indicates whether the signal was a user-mode or kernel-mode signal sig_code The si_code value of the siginfo signal regs The address of the kernel-mode stack area (deprecated in SystemTap 2.1) oldset_addr The address of the bitmask array of blocked signals (deprecated in SystemTap 2.1) sig_name A string representation of the signal
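As an illustrative sketch (not part of the original reference), the probe can be exercised with a one-line SystemTap script that prints each signal as its handler is invoked, using the sig_name and sig values listed above; the output format is only an example.
$ stap -e 'probe signal.handle { printf("%s (%d) handled\n", sig_name, sig) }'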
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-handle