8.9. biosdevname
8.9. biosdevname 8.9.1. RHBA-2013:1638 - biosdevname bug fix and enhancement update Updated biosdevname packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The biosdevname packages contain a udev helper utility which provides an optional convention for naming network interfaces; it assigns names to network interfaces based on their physical location. The utility is disabled by default, except on a limited set of Dell PowerEdge, C Series, and Precision Workstation systems. Note The biosdevname packages have been upgraded to upstream version 0.5.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 947841 ) Bug Fix BZ# 1000386 Previously, the addslot() function returned the same "dev->index_in_slot" value for two or more interfaces. As a consequence, more than one network interface could be named "renameN". This update restores the logic used to obtain a port number that existed in biosdevname version 0.3.11 and, as a result, all interfaces are named as expected. Users of biosdevname are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
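For context, the utility can be queried directly to see which name it would assign to an interface. The following is a hedged sketch rather than part of this advisory: the interface name eth0 is a placeholder, and the kernel parameters are shown only in case you want to force the convention on or off at boot.
# Show the name biosdevname would suggest for an existing interface (eth0 is a placeholder)
biosdevname -i eth0
# To force the naming convention on or off at boot, add one of the following to the kernel command line:
#   biosdevname=1    enable biosdevname naming
#   biosdevname=0    disable biosdevname naming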
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/biosdevname
Chapter 2. Setting up Maven locally
Chapter 2. Setting up Maven locally Typical Fuse application development uses Maven to build and manage projects. The following topics describe how to set up Maven locally: Section 2.1, "Preparing to set up Maven" Section 2.2, "Adding Red Hat repositories to Maven" Section 2.3, "Using local Maven repositories" Section 2.4, "Setting Maven mirror using environment variables or system properties" Section 2.5, "About Maven artifacts and coordinates" 2.1. Preparing to set up Maven Maven is a free, open source build tool from Apache. Typically, you use Maven to build Fuse applications. Procedure Download the latest version of Maven from the Maven download page . Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section 2.3, "Using local Maven repositories" . 2.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is no user-specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>
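This is not part of the documented procedure, but a quick way to confirm that the extra-repos profile and its repositories are actually picked up is to print the effective settings with the Maven help plug-in:
# Print the merged settings that Maven will actually use
mvn help:effective-settings
# The output should show the redhat-ga-repository and redhat-ea-repository entries from the active extra-repos profile.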
2.3. Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. 2.4. Setting Maven mirror using environment variables or system properties When running applications, you need access to the artifacts that are in the Red Hat Maven repositories. These repositories are added to Maven's settings.xml file. Maven checks the following locations for the settings.xml file: looks for the specified url if not found, looks for ${user.home}/.m2/settings.xml if not found, looks for ${maven.home}/conf/settings.xml if not found, looks for ${M2_HOME}/conf/settings.xml if no location is found, an empty org.apache.maven.settings.Settings instance is created. 2.4.1. About Maven mirror Maven uses a set of remote repositories to access artifacts that are not currently available in the local repository. The list of repositories almost always contains the Maven Central repository, but for Red Hat Fuse, it also contains the Red Hat Maven repositories. In cases where it is not possible or not allowed to access different remote repositories, you can use the Maven mirror mechanism. A mirror replaces a particular repository URL with a different one, so that all HTTP traffic for remote artifact lookups can be directed to a single URL. 2.4.2. Adding Maven mirror to settings.xml To set the Maven mirror, add the following section to Maven's settings.xml : No mirror is used if the above section is not found in the settings.xml file. To specify a global mirror without providing the XML configuration, you can use either a system property or an environment variable. 2.4.3. Setting Maven mirror using environment variable or system property To set the Maven mirror using either an environment variable or a system property, you can add: an environment variable called MAVEN_MIRROR_URL to the bin/setenv file a system property called mavenMirrorUrl to the etc/system.properties file (a short sketch of both options follows the command listings below) 2.4.4. Using Maven options to specify Maven mirror url To use an alternate Maven mirror url, other than the one specified by environment variables or system properties, use the following Maven options when running the application: -DmavenMirrorUrl=mirrorId::mirrorUrl for example, -DmavenMirrorUrl=my-mirror::http://mirror.net/repository -DmavenMirrorUrl=mirrorUrl for example, -DmavenMirrorUrl=http://mirror.net/repository . In this example, the <id> of the <mirror> is just a mirror. 2.5. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact.
A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters. For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element.
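The note above mentions the type element. As a hedged illustration that is not part of the original chapter, declaring the packaging type explicitly on the same bundle-demo dependency would look like this:
<dependency>
  <groupId>org.fusesource.example</groupId>
  <artifactId>bundle-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <type>bundle</type>
</dependency>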
[ "<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>jboss-public</id> <name>JBoss Public Repository Group</name> <url>https://repository.jboss.org/nexus/content/groups/public</url> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>", "mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project", "<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>", "groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version", "<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>", "<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_spring_boot/set-up-maven-locally
Chapter 10. Verifying connectivity to an endpoint
Chapter 10. Verifying connectivity to an endpoint The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating. 10.1. Connection health checks performed To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services: Kubernetes API server service Kubernetes API server endpoints OpenShift API server service OpenShift API server endpoints Load balancers To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets: Health check target service Health check target endpoints 10.2. Implementation of connection health checks The connectivity check controller orchestrates connection verification checks in your cluster. The results of the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks: Health check source This program is deployed in a single-pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object. Health check target A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node. 10.3. PodNetworkConnectivityCheck object fields The PodNetworkConnectivityCheck object fields are described in the following tables. Table 10.1. PodNetworkConnectivityCheck object fields Field Type Description metadata.name string The name of the object in the following format: <source>-to-<target> . The destination described by <target> includes one of the following strings: load-balancer-api-external load-balancer-api-internal kubernetes-apiserver-endpoint kubernetes-apiserver-service-cluster network-check-target openshift-apiserver-endpoint openshift-apiserver-service-cluster metadata.namespace string The namespace that the object is associated with. This value is always openshift-network-diagnostics . spec.sourcePod string The name of the pod where the connection check originates, such as network-check-source-596b4c6566-rgh92 . spec.targetEndpoint string The target of the connection check, such as api.devcluster.example.com:6443 . spec.tlsClientCert object Configuration for the TLS certificate to use. spec.tlsClientCert.name string The name of the TLS certificate used, if any. The default value is an empty string. status object An object representing the condition of the connection test and logs of recent connection successes and failures. status.conditions array The latest status of the connection check and any statuses. status.failures array Connection test logs from unsuccessful attempts. status.outages array Connection test logs covering the time periods of any outages. status.successes array Connection test logs from successful attempts. The following table describes the fields for objects in the status.conditions array: Table 10.2.
status.conditions Field Type Description lastTransitionTime string The time that the condition of the connection transitioned from one status to another. message string The details about the last transition in a human-readable format. reason string The last status of the transition in a machine-readable format. status string The status of the condition. type string The type of the condition. The following table describes the fields for objects in the status.outages array: Table 10.3. status.outages Field Type Description end string The timestamp from when the connection failure was resolved. endLogs array Connection log entries, including the log entry related to the successful end of the outage. message string A summary of outage details in a human-readable format. start string The timestamp from when the connection failure was first detected. startLogs array Connection log entries, including the original failure. Connection log fields The fields for a connection log entry are described in the following table. The object is used in the following fields: status.failures[] status.successes[] status.outages[].startLogs[] status.outages[].endLogs[] Table 10.4. Connection log object Field Type Description latency string Records the duration of the action. message string Provides the status in a human-readable format. reason string Provides the reason for the status in a machine-readable format. The value is one of TCPConnect , TCPConnectError , DNSResolve , DNSError . success boolean Indicates if the log entry is a success or failure. time string The start time of the connection check. 10.4. Verifying network connectivity for an endpoint As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role.
Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: $ oc get podnetworkconnectivitycheck -n openshift-network-diagnostics Example output NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m View the connection test logs: From the output of the command, identify the endpoint that you want to review the connectivity logs for. To view the object, enter the following command: $ oc get podnetworkconnectivitycheck <name> \ -n openshift-network-diagnostics -o yaml where <name> specifies the name of the PodNetworkConnectivityCheck object. Example output apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics ...
spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: "" status: conditions: - lastTransitionTime: "2021-01-13T20:11:34Z" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: "True" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" outages: - end: "2021-01-13T20:11:34Z" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T20:11:34Z" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" message: Connectivity restored after 2m59.999789186s start: "2021-01-13T20:08:34Z" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:14:34Z" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:13:34Z" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:12:34Z" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:11:34Z" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp 
connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:10:34Z" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:09:34Z" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:08:34Z" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:07:34Z" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:06:34Z" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:05:34Z"
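The full YAML shown above can be lengthy. As a hedged convenience that is not part of the documented procedure, a JSONPath query can summarize the reachability condition of every check in one line per object; the field paths come from the tables in Section 10.3, while the exact expression is a sketch you may need to adapt:
# Print each check name and the status of its Reachable condition
oc get podnetworkconnectivitycheck -n openshift-network-diagnostics \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Reachable")].status}{"\n"}{end}'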
[ "oc get podnetworkconnectivitycheck -n openshift-network-diagnostics", "NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m", "oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml", "apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - 
latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true 
time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/verifying-connectivity-endpoint
Security and compliance
Security and compliance OpenShift Container Platform 4.18 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/index
Chapter 12. Volume Snapshots
Chapter 12. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. Snapshots help you use storage more efficiently because you do not need to make a full copy each time, and they can be used as building blocks for developing an application. A volume snapshot class allows an administrator to specify different attributes for a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes, and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 12.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in the Bound state and not in use. Ensure that you stop all I/O before taking the snapshot. Note OpenShift Data Foundation provides only crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, either tear down the running pods first or use a quiesce mechanism provided by the application to ensure a consistent snapshot. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in the Ready state. (A command-line sketch of creating and restoring snapshots follows at the end of this chapter.) 12.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore a volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click the PVC name that has the volume snapshot you want to restore as a new PVC. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore.
Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach the Bound state. 12.3. Deleting volume snapshots Prerequisites To delete a volume snapshot, the volume snapshot class that is used by that volume snapshot must be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click the PVC name that has the volume snapshot you want to delete. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed.
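The console steps in Sections 12.1 and 12.2 can also be performed from the CLI by creating the corresponding objects directly. The following is a minimal, hedged sketch rather than a documented procedure: the names my-pvc, my-pvc-snapshot, my-pvc-restore, and my-project are placeholders, and the snapshot and storage class names are assumptions to be replaced with values reported by oc get volumesnapshotclass and oc get storageclass.
# VolumeSnapshot equivalent of the "Create Snapshot" console action
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot                 # placeholder snapshot name
  namespace: my-project                 # placeholder project
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumed RBD snapshot class
  source:
    persistentVolumeClaimName: my-pvc   # the Bound PVC to snapshot
---
# PersistentVolumeClaim equivalent of "Restore as new PVC"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-restore                  # placeholder name for the restored PVC
  namespace: my-project
spec:
  storageClassName: ocs-storagecluster-ceph-rbd    # assumed storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                     # at least the size of the original PVC
  dataSource:
    name: my-pvc-snapshot               # the VolumeSnapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
Apply each object with oc apply -f <file>, then watch for the snapshot to report readyToUse: true and for the restored PVC to reach the Bound state.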
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/volume-snapshots_rhodf
Updating Red Hat Satellite
Updating Red Hat Satellite Red Hat Satellite 6.15 Update Satellite Server and Capsule to a new minor release Red Hat Satellite Documentation Team [email protected]
[ "subscription-manager repos --enable satellite-maintenance-6.15-for-rhel-8-x86_64-rpms", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15.z", "satellite-maintain upgrade run --target-version 6.15.z", "dnf needs-restarting --reboothint", "reboot", "dnf install 'dnf-command(reposync)'", "[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/rhel8/8/x86_64/baseos/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/rhel8/8/x86_64/appstream/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6.15 for RHEL 8 RPMs x86_64 baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/layered/rhel8/x86_64/satellite/6.15/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-maintenance-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6.15 for RHEL 8 RPMs x86_64 baseurl=_https://satellite.example.com_/pulp/content/_My_Organization_/Library/content/dist/layered/rhel8/x86_64/sat-maintenance/6.15/os enabled=1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1", "hammer organization list", "dnf reposync --delete --disableplugin=foreman-protector --download-metadata --repoid rhel-8-for-x86_64-appstream-rpms --repoid rhel-8-for-x86_64-baseos-rpms --repoid satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --repoid satellite-6.15-for-rhel-8-x86_64-rpms -n -p ~/Satellite-repos", "tar czf Satellite-repos.tgz -C ~ Satellite-repos", "tar zxf Satellite-repos.tgz -C /root", "[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-baseos-rpms enabled=1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-appstream-rpms enabled=1 [satellite-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-6.15-for-rhel-8-x86_64-rpms enabled=1 [satellite-maintenance-6.15-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-maintenance-6.15-for-rhel-8-x86_64-rpms enabled=1", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15.z --whitelist=\"check-upstream-repository,repositories-validate\"", "satellite-maintain upgrade run --target-version 6.15.z --whitelist=\"check-upstream-repository,repositories-setup,repositories-validate\"", "dnf needs-restarting --reboothint", "reboot", 
"subscription-manager repos --enable satellite-maintenance-6.15-for-rhel-8-x86_64-rpms", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15.z", "satellite-maintain upgrade run --target-version 6.15.z", "dnf needs-restarting --reboothint", "reboot" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/updating_red_hat_satellite/index
Chapter 24. Using Podman in HPC environment
Chapter 24. Using Podman in HPC environment You can use Podman with Open MPI (Message Passing Interface) to run containers in a High Performance Computing (HPC) environment. 24.1. Using Podman with MPI The example is based on the ring.c program taken from Open MPI. In this example, a value is passed around by all processes in a ring-like fashion. Each time the message passes rank 0, the value is decremented. When each process receives the 0 message, it passes it on to the next process and then quits. In this way, every process gets the 0 message and can quit normally. Prerequisites The container-tools module is installed. Procedure Install Open MPI: To activate the environment modules, type: Load the mpi/openmpi-x86_64 module: Optionally, to automatically load the mpi/openmpi-x86_64 module, add this line to the .bashrc file: To combine mpirun and podman , create a container with the following definition: Build the container: Start the container. On a system with 4 CPUs this command starts 4 containers: As a result, mpirun starts up 4 Podman containers and each container is running one instance of the ring binary. All 4 processes are communicating over MPI with each other. Additional resources Podman in HPC environments 24.2. The mpirun options The following mpirun options are used to start the container: --mca orte_tmpdir_base /tmp/podman-mpirun line tells Open MPI to create all its temporary files in /tmp/podman-mpirun and not in /tmp . If using more than one node, this directory will be named differently on other nodes. Using /tmp directly would require mounting the complete /tmp directory into the container, which is more complicated. The mpirun command then specifies the command to start, which is the podman command. The following podman options are used to start the container: run command runs a container. --env-host option copies all environment variables from the host into the container. -v /tmp/podman-mpirun:/tmp/podman-mpirun line tells Podman to mount the directory where Open MPI creates its temporary directories and files so that it is available in the container. --userns=keep-id line ensures that the same user ID is used inside and outside the container. --net=host --pid=host --ipc=host line uses the host's network, PID, and IPC namespaces. mpi-ring is the name of the container image. /home/ring is the MPI program in the container. Additional resources Podman in HPC environments
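In principle, the same containerized run can be extended to several machines with an mpirun host file. The following is a hedged sketch, not part of the original procedure: the node names are placeholders, the mpi-ring image must already exist on every node, and /tmp/podman-mpirun must be writable on each of them.
# hostfile: one line per node with the number of slots (cores) to use
node1.example.com slots=4
node2.example.com slots=4

# Launch the containerized ring program across both nodes
mpirun --hostfile hostfile \
  --mca orte_tmpdir_base /tmp/podman-mpirun \
  podman run --env-host \
  -v /tmp/podman-mpirun:/tmp/podman-mpirun \
  --userns=keep-id \
  --net=host --pid=host --ipc=host \
  mpi-ring /home/ring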
[ "yum install openmpi", ". /etc/profile.d/modules.sh", "module load mpi/openmpi-x86_64", "echo \"module load mpi/openmpi-x86_64\" >> .bashrc", "cat Containerfile FROM registry.access.redhat.com/ubi8/ubi RUN yum -y install openmpi-devel wget && yum clean all RUN wget https://raw.githubusercontent.com/open-mpi/ompi/master/test/simple/ring.c && /usr/lib64/openmpi/bin/mpicc ring.c -o /home/ring && rm -f ring.c", "podman build --tag=mpi-ring .", "mpirun --mca orte_tmpdir_base /tmp/podman-mpirun podman run --env-host -v /tmp/podman-mpirun:/tmp/podman-mpirun --userns=keep-id --net=host --pid=host --ipc=host mpi-ring /home/ring Rank 2 has cleared MPI_Init Rank 2 has completed ring Rank 2 has completed MPI_Barrier Rank 3 has cleared MPI_Init Rank 3 has completed ring Rank 3 has completed MPI_Barrier Rank 1 has cleared MPI_Init Rank 1 has completed ring Rank 1 has completed MPI_Barrier Rank 0 has cleared MPI_Init Rank 0 has completed ring Rank 0 has completed MPI_Barrier" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_using-podman-in-hpc-environment
14.6. Using Persistent Search
14.6. Using Persistent Search A persistent search is an ldapsearch which remains open even after the initial search results are returned. Important The OpenLDAP client tools included with Red Hat Enterprise Linux do not support persistent searches. The server itself, however, does. Other LDAP clients must be used to perform persistent searches. The purpose of a persistent search is to provide a continuous list of changes to the directory entries as well as the complete entries themselves, something like a hybrid search and changelog. Therefore, the search command must specify what entries to return (the search parameters) and what changes cause an entry to be returned (entry change parameters). Persistent searches are especially useful for applications or clients which access the Directory Server and provide two important benefits: Keep a consistent and current local cache. Any client will query its local cache before trying to connect to and query the directory. Persistent searches provide the local cache necessary to improve performance for these clients. Automatically initiate directory actions. The persistent cache can be automatically updated as entries are modified, and the persistent search results can display what kind of modification was performed on the entry. Another application can use that output to update entries automatically, such as automatically creating an email account on a mail server for new users or generating a unique user ID number. There are some performance considerations when running persistent searches, as well: The ldapsearch does not send a notification when the client disconnects, and change notifications are not sent for any changes made while the search is disconnected. This means that the client's cache will not be updated if it is ever disconnected, and there is no good way to update the cache with any new, modified, or deleted entries that were changed while it was disconnected. An attacker could open a large number of persistent searches to launch a denial of service attack. A persistent search requires leaving open a TCP connection between the Directory Server and the client. This should only be done if the server is configured to allow a large number of client connections and has a way to close idle connections. In the access logs, a persistent search is identified with the tag options=persistent .
[ "[12/Jan/2009:12:51:54.899423510 -0500] conn=19636710736396323 op=0 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(objectClass=person)\" attrs=ALL options=persistent" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/persistent-search
Chapter 3. Enabling user-managed encryption for Azure
Chapter 3. Enabling user-managed encryption for Azure In OpenShift Container Platform version 4.14, you can install a cluster with a user-managed encryption key in Azure. To enable this feature, you can prepare an Azure DiskEncryptionSet before installation, modify the install-config.yaml file, and then complete the installation. 3.1. Preparing an Azure Disk Encryption Set The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer. Procedure Set the following environment variables for the Azure resource group by running the following command: $ export RESOURCEGROUP="<resource_group>" \ 1 LOCATION="<location>" 2 1 Specifies the name of the Azure resource group where you will create the Disk Encryption Set and encryption key. To avoid losing access to your keys after destroying the cluster, you should create the Disk Encryption Set in a different resource group than the resource group where you install the cluster. 2 Specifies the Azure location where you will create the resource group. Set the following environment variables for the Azure Key Vault and Disk Encryption Set by running the following command: $ export KEYVAULT_NAME="<keyvault_name>" \ 1 KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2 DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3 1 Specifies the name of the Azure Key Vault you will create. 2 Specifies the name of the encryption key you will create. 3 Specifies the name of the disk encryption set you will create. Set the environment variable for the ID of your Azure Service Principal by running the following command: $ export CLUSTER_SP_ID="<service_principal_id>" 1 1 Specifies the ID of the service principal you will use for this installation.
Enable host-level encryption in Azure by running the following commands: $ az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost" $ az feature show --namespace Microsoft.Compute --name EncryptionAtHost $ az provider register -n Microsoft.Compute Create an Azure Resource Group to hold the disk encryption set and associated resources by running the following command: $ az group create --name $RESOURCEGROUP --location $LOCATION Create an Azure key vault by running the following command: $ az keyvault create -n $KEYVAULT_NAME -g $RESOURCEGROUP -l $LOCATION \ --enable-purge-protection true Create an encryption key in the key vault by running the following command: $ az keyvault key create --vault-name $KEYVAULT_NAME -n $KEYVAULT_KEY_NAME \ --protection software Capture the ID of the key vault by running the following command: $ KEYVAULT_ID=$(az keyvault show --name $KEYVAULT_NAME --query "[id]" -o tsv) Capture the key URL in the key vault by running the following command: $ KEYVAULT_KEY_URL=$(az keyvault key show --vault-name $KEYVAULT_NAME --name \ $KEYVAULT_KEY_NAME --query "[key.kid]" -o tsv) Create a disk encryption set by running the following command: $ az disk-encryption-set create -n $DISK_ENCRYPTION_SET_NAME -l $LOCATION -g \ $RESOURCEGROUP --source-vault $KEYVAULT_ID --key-url $KEYVAULT_KEY_URL Grant the DiskEncryptionSet resource access to the key vault by running the following commands: $ DES_IDENTITY=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \ $RESOURCEGROUP --query "[identity.principalId]" -o tsv) $ az keyvault set-policy -n $KEYVAULT_NAME -g $RESOURCEGROUP --object-id \ $DES_IDENTITY --key-permissions wrapkey unwrapkey get Grant the Azure Service Principal permission to read the DiskEncryptionSet by running the following commands: $ DES_RESOURCE_ID=$(az disk-encryption-set show -n $DISK_ENCRYPTION_SET_NAME -g \ $RESOURCEGROUP --query "[id]" -o tsv) $ az role assignment create --assignee $CLUSTER_SP_ID --role "<reader_role>" \ 1 --scope $DES_RESOURCE_ID -o jsonc 1 Specifies an Azure role with read permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 3.2. Next steps Install an OpenShift Container Platform cluster: Install a cluster with customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Install a cluster into an existing VNet on installer-provisioned infrastructure Install a private cluster on installer-provisioned infrastructure Install a cluster into a government region on installer-provisioned infrastructure
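The chapter introduction mentions modifying the install-config.yaml file. The exact fields are documented in the linked installation procedures; the following is only a minimal, hedged sketch of pointing the machine pools at the disk encryption set, with the field layout assumed from the installer's Azure machine-pool schema and all bracketed values to be replaced with your own:
controlPlane:
  platform:
    azure:
      osDisk:
        diskEncryptionSet:
          resourceGroup: <resource_group>
          name: <disk_encryption_set_name>
          subscriptionId: <subscription_id>
compute:
- platform:
    azure:
      osDisk:
        diskEncryptionSet:
          resourceGroup: <resource_group>
          name: <disk_encryption_set_name>
          subscriptionId: <subscription_id>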
[ "export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2", "export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3", "export CLUSTER_SP_ID=\"<service_principal_id>\" 1", "az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"", "az feature show --namespace Microsoft.Compute --name EncryptionAtHost", "az provider register -n Microsoft.Compute", "az group create --name USDRESOURCEGROUP --location USDLOCATION", "az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true", "az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software", "KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)", "KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)", "az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL", "DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)", "az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get", "DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)", "az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/enabling-user-managed-encryption-azure
Appendix C. Cluster Connection Configuration Elements
Appendix C. Cluster Connection Configuration Elements The table below lists all of the configuration elements of a cluster-connection . Table C.1. Cluster Connection Configuration Elements Name Description address Each cluster connection applies only to addresses that match the value specified in the address field. If no address is specified, then all addresses will be load balanced. The address field also supports comma separated lists of addresses. Use exclude syntax, ! to prevent an address from being matched. Below are some example addresses: jms.eu Matches all addresses starting with jms.eu . !jms.eu Matches all addresses except for those starting with jms.eu jms.eu.uk,jms.eu.de Matches all addresses starting with either jms.eu.uk or jms.eu.de jms.eu,!jms.eu.uk Matches all addresses starting with jms.eu , but not those starting with jms.eu.uk Note You should not have multiple cluster connections with overlapping addresses (for example, "europe" and "europe.news"), because the same messages could be distributed between more than one cluster connection, possibly resulting in duplicate deliveries. call-failover-timeout Use when a call is made during a failover attempt. The default is -1 , or no timeout. call-timeout When a packet is sent over a cluster connection, and it is a blocking call, call-timeout determines how long the broker will wait (in milliseconds) for the reply before throwing an exception. The default is 30000 . check-period The interval, in milliseconds, between checks to see if the cluster connection has failed to receive pings from another broker. The default is 30000 . confirmation-window-size The size, in bytes, of the window used for sending confirmations from the broker connected to. When the broker receives confirmation-window-size bytes, it notifies its client. The default is 1048576 . A value of -1 means no window. connector-ref Identifies the connector that will be transmitted to other brokers in the cluster so that they have the correct cluster topology. This parameter is mandatory. connection-ttl Determines how long a cluster connection should stay alive if it stops receiving messages from a specific broker in the cluster. The default is 60000 . discovery-group-ref Points to a discovery-group to be used to communicate with other brokers in the cluster. This element must include the attribute discovery-group-name , which must match the name attribute of a previously configured discovery-group . initial-connect-attempts Sets the number of times the system will try to connect a broker in the cluster initially. If the max-retry is achieved, this broker will be considered permanently down, and the system will not route messages to this broker. The default is -1 , which means infinite retries. max-hops Configures the broker to load balance messages to brokers which might be connected to it only indirectly with other brokers as intermediates in a chain. This allows for more complex topologies while still providing message load-balancing. The default value is 1 , which means messages are distributed only to other brokers directly connected to this broker. This parameter is optional. max-retry-interval The maximum delay for retries, in milliseconds. The default is 2000 . message-load-balancing Determines whether and how messages will be distributed between other brokers in the cluster. Include the message-load-balancing element to enable load balancing. The default value is ON_DEMAND . You can provide a value as well. Valid values are: OFF Disables load balancing. 
STRICT Enables load balancing and forwards messages to all brokers that have a matching queue, whether or not the queue has an active consumer or a matching selector. ON_DEMAND Enables load balancing and ensures that messages are forwarded only to brokers that have active consumers with a matching selector. OFF_WITH_REDISTRIBUTION Disables load balancing but ensures that messages are forwarded only to brokers that have active consumers with a matching selector when no suitable local consumer is available. min-large-message-size If a message size, in bytes, is larger than min-large-message-size , it will be split into multiple segments when sent over the network to other cluster members. The default is 102400 . notification-attempts Sets how many times the cluster connection should broadcast itself when connecting to the cluster. The default is 2 . notification-interval Sets how often, in milliseconds, the cluster connection should broadcast itself when attaching to the cluster. The default is 1000 . producer-window-size The size, in bytes, used for producer flow control over the cluster connection. It is disabled by default, but you may want to set a value if you are using very large messages in a cluster. A value of -1 means no window. reconnect-attempts Sets the number of times the system will try to reconnect to a broker in the cluster. If the max-retry is achieved, this broker will be considered permanently down and the system will stop routing messages to this broker. The default is -1 , which means infinite retries. retry-interval Determines the interval, in milliseconds, between retry attempts. If the cluster connection is created and the target broker has not been started or is booting, then the cluster connections from other brokers will retry connecting to the target until it comes back up. This parameter is optional. The default value is 500 milliseconds. retry-interval-multiplier The multiplier used to increase the retry-interval after each reconnect attempt. The default is 1. use-duplicate-detection Cluster connections use bridges to link the brokers, and bridges can be configured to add a duplicate ID property in each message that is forwarded. If the target broker of the bridge crashes and then recovers, messages might be resent from the source broker. By setting use-duplicate-detection to true , any duplicate messages will be filtered out and ignored on receipt at the target broker. The default is true .
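To show how these elements fit together in practice, the following is a minimal sketch of a cluster-connection definition as it might appear inside the <core> element of broker.xml. The connector and discovery group names are placeholders that must match resources defined elsewhere in the configuration, and only a handful of the elements from the table are shown.
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms.eu</address> <!-- matches addresses starting with jms.eu, as in the examples above -->
      <connector-ref>netty-connector</connector-ref> <!-- placeholder connector name -->
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/> <!-- placeholder discovery group -->
   </cluster-connection>
</cluster-connections>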
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/cluster_connection_elements
Specialized hardware and driver enablement
Specialized hardware and driver enablement OpenShift Container Platform 4.16 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc adm release info quay.io/openshift-release-dev/ocp-release:4.16.z-x86_64 --image-for=driver-toolkit", "oc adm release info quay.io/openshift-release-dev/ocp-release:4.16.z-aarch64 --image-for=driver-toolkit", "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b53883ca2bac5925857148c4a1abc300ced96c222498e3bc134fe7ce3a1dd404", "podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>", "oc new-project simple-kmod-demo", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: dockerfile: | ARG DTK FROM USD{DTK} as builder ARG KVER WORKDIR /build/ RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR /build/simple-kmod RUN make all install KVER=USD{KVER} FROM registry.redhat.io/ubi8/ubi-minimal ARG KVER # Required for installing `modprobe` RUN microdnf install kmod COPY --from=builder /lib/modules/USD{KVER}/simple-kmod.ko /lib/modules/USD{KVER}/ COPY --from=builder /lib/modules/USD{KVER}/simple-procfs-kmod.ko /lib/modules/USD{KVER}/ RUN depmod USD{KVER} strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO # USD oc adm release info quay.io/openshift-release-dev/ocp-release:<cluster version>-x86_64 --image-for=driver-toolkit - name: DTK value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34864ccd2f4b6e385705a730864c04a40908e57acede44457a783d739e377cae - name: KVER value: 4.18.0-372.26.1.el8_6.x86_64 output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo", "OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})", "DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)", "sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml", "oc create -f 0000-buildconfig.yaml", "apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [sleep, infinity] lifecycle: postStart: exec: command: [\"modprobe\", \"-v\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] preStop: exec: 
command: [\"modprobe\", \"-r\", \"-a\" , \"simple-kmod\", \"simple-procfs-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc create -f 1000-drivercontainer.yaml", "oc get pod -n simple-kmod-demo", "NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s", "oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple", "simple_procfs_kmod 16384 0 simple_kmod 16384 0", "apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"", "oc create -f nfd-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd", "oc create -f nfd-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nfd-sub.yaml", "oc project openshift-nfd", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.16 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12", "{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> 
docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get nodefeaturediscovery nfd-instance -o yaml", "oc get pods -n <nfd_namespace>", "core: sleepInterval: 60s 1", "core: sources: - system - custom", "core: labelWhiteList: '^cpu-cpuid'", "core: noPublish: true 1", "sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]", "sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]", "sources: kernel: kconfigFile: \"/path/to/kconfig\"", "sources: kernel: configOpts: [NO_HZ, X86, DMI]", "sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]", "sources: pci: deviceLabelFields: [class, vendor, device]", "sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]", "sources: pci: deviceLabelFields: [class, vendor]", "source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}", "oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml", "apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 
30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3", "podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help", "nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key", "nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml", "nfd-topology-updater -no-publish", "nfd-topology-updater -oneshot -no-publish", "nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock", "nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443", "nfd-topology-updater -server-name-override=localhost", "nfd-topology-updater -sleep-interval=1h", "nfd-topology-updater -watch-namespace=rte", "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s", "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc apply -f kmm-security-constraint.yaml", "oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller -n openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller 1/1 1 1 97s", "oc edit configmap -n \"USDnamespace\" kmm-operator-manager-config", "healthProbeBindAddress: :8081 job: gcDelay: 1h leaderElection: enabled: true resourceID: kmm.sigs.x-k8s.io webhook: disableHTTP2: true # 
CVE-2023-44487 port: 9443 metrics: enableAuthnAuthz: true disableHTTP2: true # CVE-2023-44487 bindAddress: 0.0.0.0:8443 secureServing: true worker: runAsUser: 0 seLinuxType: spc_t setFirmwareClassPath: /var/lib/firmware", "oc delete pod -n \"<namespace>\" -l app.kubernetes.io/component=kmm", "oc delete -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default", "spec: moduleLoader: container: modprobe: moduleName: mod_a dirName: /opt firmwarePath: /firmware parameters: - param=1 modulesLoadingOrder: - mod_a - mod_b", "oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]", "spec: moduleLoader: container: modprobe: moduleName: mod_a inTreeModulesToRemove: [mod_a, mod_b]", "spec: moduleLoader: container: kernelMappings: - literal: 6.0.15-300.fc37.x86_64 containerImage: \"some.registry/org/my-kmod:USD{KERNEL_FULL_VERSION}\" inTreeModulesToRemove: [<module_name>, <module_name>]", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder # Build steps # FROM ubi9/ubi ARG KERNEL_FULL_VERSION RUN dnf update && dnf install -y kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ Create the symbolic link RUN ln -s /lib/modules/USD{KERNEL_FULL_VERSION} /opt/lib/modules/USD{KERNEL_FULL_VERSION}/host RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "depmod -b /opt USD{KERNEL_FULL_VERSION}+`.", "apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM registry.redhat.io/ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko 
/opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_FULL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_FULL_VERSION}/build make all FROM ubi9/ubi-minimal ARG KERNEL_FULL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_FULL_VERSION}/ RUN depmod -b /opt USD{KERNEL_FULL_VERSION}", "openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv", "oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>", "oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>", "cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64", "cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64", "apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>", "oc apply -f <yaml_filename>", "oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text", "oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d", "--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<module_name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image_name> 2 sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image_name> 3 keySecret: # a secret holding the private secureboot key with the key 'key' name: <private_key_secret_name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate_secret_name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64", "--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: <namespace> 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as 
builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi9/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: <namespace> 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: <final_driver_container_name> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private_key_secret_name> certSecret: name: <certificate_secret_name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED 1 value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: 
all-managed-clusters spec: clusterSelector: 2 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm", "oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>-", "oc label node/<node_name> kmm.node.kubernetes.io/version-module.<module_namespace>.<module_name>=<desired_version>", "ProduceMachineConfig(machineConfigName, machineConfigPoolRef, kernelModuleImage, kernelModuleName string) (string, error)", "kind: MachineConfigPool metadata: name: sfc spec: machineConfigSelector: 1 matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, sfc]} nodeSelector: 2 matchLabels: node-role.kubernetes.io/sfc: \"\" paused: false maxUnavailable: 1", "metadata: labels: machineconfiguration.opensfhit.io/role: master", "metadata: labels: machineconfiguration.opensfhit.io/role: worker", "modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'", "FROM registry.redhat.io/ubi9/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi9/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1", "oc logs -fn openshift-kmm deployments/kmm-operator-controller", "oc logs -fn openshift-kmm deployments/kmm-operator-webhook-server", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller", "oc logs -fn openshift-kmm deployments/kmm-operator-hub-webhook-server", "oc describe modules.kmm.sigs.x-k8s.io kmm-ci-a [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BuildCreated 2m29s kmm Build created for kernel 6.6.2-201.fc39.x86_64 Normal BuildSucceeded 63s kmm Build job succeeded for kernel 6.6.2-201.fc39.x86_64 Normal SignCreated 64s (x2 over 64s) kmm Sign created for kernel 6.6.2-201.fc39.x86_64 Normal SignSucceeded 57s kmm Sign job succeeded for kernel 6.6.2-201.fc39.x86_64", "oc describe node my-node [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- [...] 
Normal ModuleLoaded 4m17s kmm Module default/kmm-ci-a loaded into the kernel Normal ModuleUnloaded 2s kmm Module default/kmm-ci-a unloaded from the kernel", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc logs -fn openshift-kmm deployments/kmm-operator-controller", "I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 
controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGE_MUST_GATHER\")].value}') oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller", "I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting 
server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 
11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/specialized_hardware_and_driver_enablement/index
Chapter 31. VDO Evaluation
Chapter 31. VDO Evaluation 31.1. Introduction VDO is software that provides inline block-level deduplication, compression, and thin provisioning capabilities for primary storage. VDO installs within the Linux device mapper framework, where it takes ownership of existing physical block devices and remaps these to new, higher-level block devices with data-efficiency properties. Specifically, VDO can multiply the effective capacity of these devices by ten or more. These benefits require additional system resources, so it is therefore necessary to measure VDO's impact on system performance. Storage vendors undoubtedly have existing in-house test plans and expertise that they use to evaluate new storage products. Since the VDO layer helps to identify deduplication and compression, different tests may be required. An effective test plan requires studying the VDO architecture and exploring these items: VDO-specific configurable properties (performance tuning end-user applications) Impact of being a native 4 KB block device Response to access patterns and distributions of deduplication and compression Performance in high-load environments (very important) Analyze cost vs. capacity vs. performance, based on application Failure to consider such factors up front has created situations that have invalidated certain tests and required customers to repeat testing and data collection efforts. 31.1.1. Expectations and Deliverables This Evaluation Guide is meant to augment, not replace, a vendor's internal evaluation effort. With a modest investment of time, it will help evaluators produce an accurate assessment of VDO's integration into existing storage devices. This guide is designed to: Help engineers identify configuration settings that elicit optimal responses from the test device Provide an understanding of basic tuning parameters to help avoid product misconfigurations Create a performance results portfolio as a reference to compare against "real" application results Identify how different workloads affect performance and data efficiency Expedite time-to-market with VDO implementations The test results will help Red Hat engineers assist in understanding VDO's behavior when integrated into specific storage environments. OEMs will understand how to design their deduplication and compression capable devices, and also how their customers can tune their applications to best use those devices. Be aware that the procedures in this document are designed to provide conditions under which VDO can be most realistically evaluated. Altering test procedures or parameters may invalidate results. Red Hat Sales Engineers are available to offer guidance when modifying test plans.
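As a practical starting point, the commands below are a minimal sketch of creating a test VDO volume and checking its space savings; the device path, volume name, and logical size are placeholders, and the exact options should be confirmed against the VDO installation and administration chapters for your release.
# vdo create --name=vdo_eval --device=/dev/sdX --vdoLogicalSize=10T
# mkfs.xfs -K /dev/mapper/vdo_eval
# vdostats --human-readable
Capturing vdostats output before and after each workload run makes it easier to correlate the data-efficiency numbers with the performance measurements gathered during the evaluation.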
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-evaluation
2.6. Data Security for Remote Client Server Mode
2.6. Data Security for Remote Client Server Mode
2.6.1. About Security Realms
A security realm is a series of mappings between users and passwords, and users and roles. Security realms are a mechanism for adding authentication and authorization to your EJB and Web applications. Red Hat JBoss Data Grid Server provides two security realms by default:
ManagementRealm stores authentication information for the Management API, which provides the functionality for the Management CLI and web-based Management Console. It provides an authentication system for managing JBoss Data Grid Server itself. You could also use the ManagementRealm if your application needed to authenticate with the same business rules you use for the Management API.
ApplicationRealm stores user, password, and role information for Web Applications and EJBs.
Each realm is stored in two files on the filesystem:
REALM-users.properties stores usernames and hashed passwords.
REALM-roles.properties stores user-to-role mappings.
mgmt-groups.properties stores the user-to-role mapping file for ManagementRealm.
The properties files are stored in the standalone/configuration/ directories. The files are written simultaneously by the add-user.sh or add-user.bat command. When you run the command, the first decision you make is which realm to add your new user to.
2.6.2. Add a New Security Realm
Run the Management CLI. Start the cli.sh or cli.bat command and connect to the server.
Create the new security realm itself. Run the following command to create a new security realm named MyDomainRealm on a domain controller or a standalone server.
Create the references to the properties file which will store information about the new role. Run the following command to create a pointer to a file named myfile.properties, which will contain the properties pertaining to the new role.
Note: The newly-created properties file is not managed by the included add-user.sh and add-user.bat scripts. It must be managed externally.
Result: Your new security realm is created. When you add users and roles to this new realm, the information will be stored in a separate file from the default security realms. You can manage this new file using your own applications or procedures.
2.6.3. Add a User to a Security Realm
Run the add-user.sh or add-user.bat command. Open a terminal and change directories to the JDG_HOME/bin/ directory. If you run Red Hat Enterprise Linux or another UNIX-like operating system, run add-user.sh. If you run Microsoft Windows Server, run add-user.bat.
Choose whether to add a Management User or Application User. For this procedure, type b to add an Application User.
Choose the realm the user will be added to. By default, the only available realm is ApplicationRealm. If you have added a custom realm, you can type its name instead.
Type the desired username, password, and optional roles when prompted.
Verify your choice by typing yes, or type no to cancel the changes. The changes are written to each of the properties files for the security realm.
2.6.4. Configuring Security Realms Declaratively
In Remote Client-Server mode, a Hot Rod endpoint must specify a security realm.
The security realm declares an authentication and an authorization section.
Example 2.10. Configuring Security Realms Declaratively
The server-identities parameter can also be used to specify certificates.
2.6.5. Loading Roles from LDAP for Authorization (Remote Client-Server Mode)
An LDAP directory contains entries for user accounts and groups, cross referenced by attributes. Depending on the LDAP server configuration, a user entity may map the groups the user belongs to through memberOf attributes; a group entity may map which users belong to it through uniqueMember attributes; or both mappings may be maintained by the LDAP server.
Users generally authenticate against the server using a simple user name. When searching for group membership information, depending on the directory server in use, searches could be performed using this simple name or using the distinguished name of the user's entry in the directory.
The authentication step of a user connecting to the server always happens first. Once the user is successfully authenticated, the server loads the user's groups. The authentication step and the authorization step each require a connection to the LDAP server. The realm optimizes this process by reusing the authentication connection for the group loading step. As will be shown within the configuration steps below, it is possible to define rules within the authorization section to convert a user's simple user name to their distinguished name.
The result of a "user name to distinguished name mapping" search during authentication is cached and reused during the authorization query when the force attribute is set to "false". When force is true, the search is performed again during authorization (while loading groups). This is typically done when different servers perform authentication and authorization.
Important: These examples specify some attributes with their default values. This is done for demonstration. Attributes that specify their default values are removed from the configuration when it is persisted by the server. The exception is the force attribute. It is required, even when set to the default value of false.
username-to-dn
The username-to-dn element specifies how to map the user name to the distinguished name of their entry in the LDAP directory. This element is only required when both of the following are true:
The authentication and authorization steps are against different LDAP servers.
The group search uses the distinguished name.
1:1 username-to-dn
This specifies that the user name entered by the remote user is the user's distinguished name. This defines a 1:1 mapping and there is no additional configuration.
username-filter
This option is very similar to the simple option described above for the authentication step. A specified attribute is searched for a match against the supplied user name. The attributes that can be set here are:
base-dn : The distinguished name of the context to begin the search.
recursive : Whether the search will extend to sub contexts. Defaults to false .
attribute : The attribute of the user's entry to try and match against the supplied user name. Defaults to uid .
user-dn-attribute : The attribute to read to obtain the user's distinguished name. Defaults to dn .
advanced-filter
The final option is to specify an advanced filter; as in the authentication section, this is an opportunity to use a custom filter to locate the user's distinguished name.
For the attributes that match those in the username-filter example, the meaning and default values are the same. There is one new attribute:
filter : Custom filter used to search for a user's entry, where the user name will be substituted in the {0} place holder.
Important: The XML must remain valid after the filter is defined, so if any special characters such as & are used, ensure the proper form is used. For example, &amp; for the & character.
The Group Search
There are two different styles that can be used when searching for group membership information. The first style is where the user's entry contains an attribute that references the groups the user is a member of. The second style is where the group contains an attribute referencing the user's entry.
When there is a choice of which style to use, Red Hat recommends that the configuration for a user's entry referencing the group is used. This is because with this method group information can be loaded by reading attributes of known distinguished names without having to perform any searches. The other approach requires extensive searches to identify the groups that reference the user.
Before describing the configuration, here are some LDIF examples to illustrate this.
Example 2.11. Principal to Group - LDIF Example
This example illustrates a user TestUserOne who is a member of GroupOne, and GroupOne is in turn a member of GroupFive. The group membership is shown by the use of a memberOf attribute, which is set to the distinguished name of the group of which the user (or group) is a member. It is not shown here, but a user could potentially have multiple memberOf attributes set, one for each group of which the user is directly a member.
Example 2.12. Group to Principal - LDIF Example
This example shows the same user TestUserOne who is a member of GroupOne, which is in turn a member of GroupFive; however, in this case it is an attribute uniqueMember from the group to the user being used for the cross reference. Again, the attribute used for the group membership cross reference can be repeated; if you look at GroupFive, there is also a reference to another user TestUserFive, which is not shown here.
General Group Searching
Before looking at the examples for the two approaches shown above, we first need to define the attributes common to both of these.
group-name : This attribute is used to specify the form that should be used for the group name returned as the list of groups of which the user is a member. This can either be the simple form of the group name or the group's distinguished name. If the distinguished name is required, this attribute can be set to DISTINGUISHED_NAME . Defaults to SIMPLE .
iterative : This attribute is used to indicate if, after identifying the groups a user is a member of, we should also iteratively search based on the groups to identify which groups the groups are a member of. If iterative searching is enabled, we keep going until either we reach a group that is not a member of any other groups or a cycle is detected. Defaults to false .
Cyclic group membership is not a problem. A record of each search is kept to prevent groups that have already been searched from being searched again.
Important: For iterative searching to work, the group entries need to look the same as user entries. The same approach used to identify the groups a user is a member of is then used to identify the groups of which the group is a member.
This would not be possible if for group to group membership the name of the attribute used for the cross reference changes or if the direction of the reference changes. group-dn-attribute : On an entry for a group which attribute is its distinguished name. Defaults to dn . group-name-attribute : On an entry for a group which attribute is its simple name. Defaults to uid . Example 2.13. Principal to Group Example Configuration Based on the example LDIF from above here is an example configuration iteratively loading a user's groups where the attribute used to cross reference is the memberOf attribute on the user. The most important aspect of this configuration is that the principal-to-group element has been added with a single attribute. group-attribute : The name of the attribute on the user entry that matches the distinguished name of the group the user is a member of. Defaults to memberOf . Example 2.14. Group to Principal Example Configuration This example shows an iterative search for the group to principal LDIF example shown above. Here an element group-to-principal is added. This element is used to define how searches for groups that reference the user entry will be performed. The following attributes are set: base-dn : The distinguished name of the context to use to begin the search. recursive : Whether sub-contexts also be searched. Defaults to false . search-by : The form of the role name used in searches. Valid values are SIMPLE and DISTINGUISHED_NAME . Defaults to DISTINGUISHED_NAME . Within the group-to-principal element there is a membership-filter element to define the cross reference. principal-attribute : The name of the attribute on the group entry that references the user entry. Defaults to member . Report a bug 2.6.6. Hot Rod Interface Security 2.6.6.1. Publish Hot Rod Endpoints as a Public Interface Red Hat JBoss Data Grid's Hot Rod server operates as a management interface as a default. To extend its operations to a public interface, alter the value of the interface parameter in the socket-binding element from management to public as follows: Report a bug 2.6.6.2. Encryption of communication between Hot Rod Server and Hot Rod client Hot Rod can be encrypted using TLS/SSL, and has the option to require certificate-based client authentication. Use the following procedure to secure the Hot Rod connector using SSL. Procedure 2.3. Secure Hot Rod Using SSL/TLS Generate a Keystore Create a Java Keystore using the keytool application distributed with the JDK and add your certificate to it. The certificate can be either self signed, or obtained from a trusted CA depending on your security policy. Place the Keystore in the Configuration Directory Put the keystore in the ~/JDG_HOME/standalone/configuration directory with the standalone-hotrod-ssl.xml file from the ~/JDG_HOME/docs/examples/configs directory. Declare an SSL Server Identity Declare an SSL server identity within a security realm in the management section of the configuration file. The SSL server identity must specify the path to a keystore and its secret key. See Section 2.6.7.4, "Configure Hot Rod Authentication (X.509)" for details about these parameters. Add the Security Element Add the security element to the Hot Rod connector as follows: Server Authentication of Certificate If you require the server to perform authentication of the client certificate, create a truststore that contains the valid client certificates and set the require-ssl-client-auth attribute to true . 
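The keytool commands for the keystore and truststore steps above are not reproduced in this guide; a typical invocation might look like the following sketch, where the aliases, file names, passwords, and distinguished name are illustrative assumptions rather than values taken from this guide:
keytool -genkeypair -alias hotrod -keyalg RSA -keysize 2048 -validity 365 -dname "CN=jdg-server.example.com" -keystore keystore_server.jks -storepass secret -keypass secret
keytool -importcert -alias client1 -file client1.cer -keystore truststore_client.jks -storepass secret
The first command creates the server keystore referenced by the SSL server identity; the second imports a trusted client certificate into the truststore that is consulted when require-ssl-client-auth is set to true.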
Start the Server Start the server using the following: This will start a server with a Hot Rod endpoint on port 11222. This endpoint will only accept SSL connections. Securing Hot Rod using SSL can also be configured programmatically. Example 2.15. Secure Hot Rod Using SSL/TLS Important To prevent plain text passwords from appearing in configurations or source code, plain text passwords should be changed to Vault passwords. For more information about how to set up Vault passwords, see the Red Hat Enterprise Application Platform Security Guide . Report a bug 2.6.7. User Authentication over Hot Rod Using SASL User authentication over Hot Rod can be implemented using the following Simple Authentication and Security Layer (SASL) mechanisms: PLAIN is the least secure mechanism because credentials are transported in plain text format. However, it is also the simplest mechanism to implement. This mechanism can be used in conjunction with encryption ( SSL ) for additional security. DIGEST-MD5 is a mechanism that hashes the credentials before transporting them. As a result, it is more secure than the PLAIN mechanism. GSSAPI is a mechanism that uses Kerberos tickets. As a result, it requires a correctly configured Kerberos Domain Controller (for example, Microsoft Active Directory). EXTERNAL is a mechanism that obtains the required credentials from the underlying transport (for example, from an X.509 client certificate) and therefore requires client certificate encryption to work correctly. Report a bug 2.6.7.1. Configure Hot Rod Authentication (GSSAPI/Kerberos) Use the following steps to set up Hot Rod authentication using the SASL GSSAPI/Kerberos mechanism: Procedure 2.4. Configure SASL GSSAPI/Kerberos Authentication Server-side Configuration The following steps must be configured on the server side: Define a Kerberos security login module using the security domain subsystem: Ensure that the cache-container has authorization roles defined, and that these roles are applied in the cache's authorization block as seen in Section 2.4, "Configuring Red Hat JBoss Data Grid for Authorization" . Configure a Hot Rod connector as follows: The server-name attribute specifies the name that the server declares to incoming clients. The client configuration must also contain the same server name value. The server-context-name attribute specifies the name of the login context used to retrieve a server subject for certain SASL mechanisms (for example, GSSAPI). The mechanisms attribute specifies the authentication mechanism in use. See Section 2.6.7, "User Authentication over Hot Rod Using SASL" for a list of supported mechanisms. The qop attribute specifies the SASL quality of protection value for the configuration. Supported values for this attribute are auth (authentication), auth-int (authentication and integrity, meaning that messages are verified against checksums to detect tampering), and auth-conf (authentication, integrity, and confidentiality, meaning that messages are also encrypted). Multiple values can be specified, for example, auth-int auth-conf . The ordering implies preference, so the first value that matches the preference of both the client and the server is chosen. The strength attribute specifies the SASL cipher strength. Valid values are low , medium , and high . The no-anonymous element within the policy element specifies whether mechanisms that accept anonymous logins are permitted. Set this value to false to permit anonymous logins and true to deny them.
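As a minimal sketch of the server-side attributes just described (the server name, context name, mechanism, and protection values below are assumptions chosen for illustration, not values mandated by this guide), a GSSAPI connector that prefers integrity but also accepts confidentiality might declare:
<hotrod-connector socket-binding="hotrod" cache-container="default">
  <authentication security-realm="ApplicationRealm">
    <sasl server-name="node0" server-context-name="infinispan-server" mechanisms="GSSAPI" qop="auth-int auth-conf" strength="high">
      <policy>
        <no-anonymous value="true" />
      </policy>
    </sasl>
  </authentication>
</hotrod-connector>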
Client-side Configuration The following steps must be configured on the client side: Define a login module in a login configuration file ( gss.conf ) on the client side: Set up the following system properties: Note The krb5.conf file is dependent on the environment and must point to the Kerberos Key Distribution Center. Configure the Hot Rod Client: Report a bug 2.6.7.2. Configure Hot Rod Authentication (MD5) Use the following steps to set up Hot Rod authentication using the SASL MD5 mechanism: Procedure 2.5. Configure Hot Rod Authentication (MD5) Set up the Hot Rod Connector configuration by adding the sasl element to the authentication element (for details on the authentication element, see Section 2.6.4, "Configuring Security Realms Declaratively" ) as follows: The server-name attribute specifies the name that the server declares to incoming clients. The client configuration must also contain the same server name value. The mechanisms attribute specifies the authentication mechanism in use. See Section 2.6.7, "User Authentication over Hot Rod Using SASL" for a list of supported mechanisms. The qop attribute specifies the SASL quality of protection value for the configuration. Supported values for this attribute are auth , auth-int , and auth-conf . Connect the client to the configured Hot Rod connector as follows: Report a bug 2.6.7.3. Configure Hot Rod Using LDAP/Active Directory Use the following to configure authentication over Hot Rod using LDAP or Microsoft Active Directory: The following are some details about the elements and parameters used in this configuration: The security-realm element's name parameter specifies the security realm to use when establishing the connection. The authentication element contains the authentication details. The ldap element specifies how LDAP searches are used to authenticate a user. First, a connection to LDAP is established and a search is conducted using the supplied user name to identify the distinguished name of the user. A subsequent connection to the server is established using the password supplied by the user. If the second connection succeeds, the authentication is a success. The connection parameter specifies the name of the connection to use to connect to LDAP. The (optional) recursive parameter specifies whether the filter is executed recursively. The default value for this parameter is false . The base-dn parameter specifies the distinguished name of the context from which to begin the search. The (optional) user-dn parameter specifies which attribute to read for the user's distinguished name after the user is located. The default value for this parameter is dn . The outbound-connections element specifies the name of the connection used to connect to the LDAP directory. The ldap element specifies the properties of the outgoing LDAP connection. The name parameter specifies the unique name used to reference this connection. The url parameter specifies the URL used to establish the LDAP connection. The search-dn parameter specifies the distinguished name of the user to authenticate and to perform the searches. The search-credential parameter specifies the password required to connect to LDAP as the search-dn . The (optional) initial-context-factory parameter allows the overriding of the initial context factory. The default value of this parameter is com.sun.jndi.ldap.LdapCtxFactory . Report a bug
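Before wiring the realm to LDAP, it can be useful to verify the connection settings from a shell. The following ldapsearch invocation is a sketch that reuses the values from the example configuration; the user some_user is a hypothetical placeholder, and the password appears inline only for illustration:
ldapsearch -x -H ldap://my_ldap_server -D "CN=test,CN=Users,DC=infinispan,DC=org" -w Test_password -b "cn=users,dc=infinispan,dc=org" "(cn=some_user)"
If this search returns the expected entry, the same url, search-dn, search-credential, and base-dn values should work in the security realm.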
2.6.7.4. Configure Hot Rod Authentication (X.509) The X.509 certificate can be installed at the node, and be made available to other nodes for authentication purposes for inbound and outbound SSL connections. This is enabled using the <server-identities/> element of a security realm definition, which defines how a server appears to external applications. This element can be used to configure a password to be used when establishing a remote connection, as well as the loading of an X.509 key. The following example shows how to install an X.509 certificate on the node. In the provided example, the SSL element contains the <keystore/> element, which is used to define how to load the key from the file-based keystore. The following parameters are available for this element. Table 2.4. <server-identities/> Options Parameter Mandatory/Optional Description path Mandatory This is the path to the keystore; it can be an absolute path or a path relative to the service named by the relative-to attribute. relative-to Optional The name of a service representing a path the keystore is relative to. keystore-password Mandatory The password required to open the keystore. alias Optional The alias of the entry to use from the keystore. For a keystore with multiple entries, in practice the first usable entry is used, but this should not be relied on; set the alias to guarantee which entry is used. key-password Optional The password used to load the key entry; if omitted, the keystore-password is used instead. Note If the following error occurs, specify a key-password as well as an alias to ensure only one key is loaded. Report a bug
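A sketch of a keystore declaration that pins both the entry and its key password, which avoids the error noted above when the keystore holds several keys, follows; the protocol, path, passwords, and alias are illustrative assumptions:
<server-identities>
  <ssl protocol="TLSv1.2">
    <keystore path="keystore_server.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="hotrod" key-password="secret" />
  </ssl>
</server-identities>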
[ "/host=master/core-service=management/security-realm=MyDomainRealm:add()", "/host=master/core-service=management/security-realm=MyDomainRealm/authentication=properties:add(path=myfile.properties)", "<security-realms> <security-realm name=\"ManagementRealm\"> <authentication> <local default-user=\"USDlocal\" skip-group-loading=\"true\"/> <properties path=\"mgmt-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization map-groups-to-roles=\"false\"> <properties path=\"mgmt-groups.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> <security-realm name=\"ApplicationRealm\"> <authentication> <local default-user=\"USDlocal\" allowed-users=\"*\" skip-group-loading=\"true\"/> <properties path=\"application-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization> <properties path=\"application-roles.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> </security-realms>", "<authorization> <ldap connection=\"...\"> <!-- OPTIONAL --> <username-to-dn force=\"true\"> <!-- Only one of the following. --> <username-is-dn /> <username-filter base-dn=\"...\" recursive=\"...\" user-dn-attribute=\"...\" attribute=\"...\" /> <advanced-filter base-dn=\"...\" recursive=\"...\" user-dn-attribute=\"...\" filter=\"...\" /> </username-to-dn> <group-search group-name=\"...\" iterative=\"...\" group-dn-attribute=\"...\" group-name-attribute=\"...\" > <!-- One of the following --> <group-to-principal base-dn=\"...\" recursive=\"...\" search-by=\"...\"> <membership-filter principal-attribute=\"...\" /> </group-to-principal> <principal-to-group group-attribute=\"...\" /> </group-search> </ldap> </authorization>", "<username-to-dn force=\"false\"> <username-is-dn /> </username-to-dn>", "<username-to-dn force=\"true\"> <username-filter base-dn=\"dc=people,dc=harold,dc=example,dc=com\" recursive=\"false\" attribute=\"sn\" user-dn-attribute=\"dn\" /> </username-to-dn>", "<username-to-dn force=\"true\"> <advanced-filter base-dn=\"dc=people,dc=harold,dc=example,dc=com\" recursive=\"false\" filter=\"sAMAccountName={0}\" user-dn-attribute=\"dn\" /> </username-to-dn>", "dn: uid=TestUserOne,ou=users,dc=principal-to-group,dc=example,dc=org objectClass: extensibleObject objectClass: top objectClass: groupMember objectClass: inetOrgPerson objectClass: uidObject objectClass: person objectClass: organizationalPerson cn: Test User One sn: Test User One uid: TestUserOne distinguishedName: uid=TestUserOne,ou=users,dc=principal-to-group,dc=example,dc=org memberOf: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org memberOf: uid=Slashy/Group,ou=groups,dc=principal-to-group,dc=example,dc=org userPassword:: e1NTSEF9WFpURzhLVjc4WVZBQUJNbEI3Ym96UVAva0RTNlFNWUpLOTdTMUE9PQ== dn: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org objectClass: extensibleObject objectClass: top objectClass: groupMember objectClass: group objectClass: uidObject uid: GroupOne distinguishedName: uid=GroupOne,ou=groups,dc=principal-to-group,dc=example,dc=org memberOf: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org dn: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org objectClass: extensibleObject objectClass: top objectClass: groupMember objectClass: group objectClass: uidObject uid: GroupFive distinguishedName: uid=GroupFive,ou=subgroups,ou=groups,dc=principal-to-group,dc=example,dc=org", "dn: 
uid=TestUserOne,ou=users,dc=group-to-principal,dc=example,dc=org objectClass: top objectClass: inetOrgPerson objectClass: uidObject objectClass: person objectClass: organizationalPerson cn: Test User One sn: Test User One uid: TestUserOne userPassword:: e1NTSEF9SjR0OTRDR1ltaHc1VVZQOEJvbXhUYjl1dkFVd1lQTmRLSEdzaWc9PQ== dn: uid=GroupOne,ou=groups,dc=group-to-principal,dc=example,dc=org objectClass: top objectClass: groupOfUniqueNames objectClass: uidObject cn: Group One uid: GroupOne uniqueMember: uid=TestUserOne,ou=users,dc=group-to-principal,dc=example,dc=org dn: uid=GroupFive,ou=subgroups,ou=groups,dc=group-to-principal,dc=example,dc=org objectClass: top objectClass: groupOfUniqueNames objectClass: uidObject cn: Group Five uid: GroupFive uniqueMember: uid=TestUserFive,ou=users,dc=group-to-principal,dc=example,dc=org uniqueMember: uid=GroupOne,ou=groups,dc=group-to-principal,dc=example,dc=org", "<group-search group-name=\"...\" iterative=\"...\" group-dn-attribute=\"...\" group-name-attribute=\"...\" > </group-search>", "<authorization> <ldap connection=\"LocalLdap\"> <username-to-dn> <username-filter base-dn=\"ou=users,dc=principal-to-group,dc=example,dc=org\" recursive=\"false\" attribute=\"uid\" user-dn-attribute=\"dn\" /> </username-to-dn> <group-search group-name=\"SIMPLE\" iterative=\"true\" group-dn-attribute=\"dn\" group-name-attribute=\"uid\"> <principal-to-group group-attribute=\"memberOf\" /> </group-search> </ldap> </authorization>", "<authorization> <ldap connection=\"LocalLdap\"> <username-to-dn> <username-filter base-dn=\"ou=users,dc=group-to-principal,dc=example,dc=org\" recursive=\"false\" attribute=\"uid\" user-dn-attribute=\"dn\" /> </username-to-dn> <group-search group-name=\"SIMPLE\" iterative=\"true\" group-dn-attribute=\"dn\" group-name-attribute=\"uid\"> <group-to-principal base-dn=\"ou=groups,dc=group-to-principal,dc=example,dc=org\" recursive=\"true\" search-by=\"DISTINGUISHED_NAME\"> <membership-filter principal-attribute=\"uniqueMember\" /> </group-to-principal> </group-search> </ldap> </authorization>", "<socket-binding name=\"hotrod\" interface=\"public\" port=\"11222\" />", "<server-identities> <ssl protocol=\"...\"> <keystore path=\"...\" relative-to=\"...\" keystore-password=\"USD{VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::ENCRYPTED_VALUE}\" /> </ssl> <secret value=\"...\" /> </server-identities>", "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"local\"> <encryption ssl=\"true\" security-realm=\"ApplicationRealm\" require-ssl-client-auth=\"false\" /> </hotrod-connector>", "bin/standalone.sh -c standalone-hotrod-ssl.xml", "package org.infinispan.client.hotrod.configuration; import java.util.Arrays; import javax.net.ssl.KeyManager; import javax.net.ssl.SSLContext; import javax.net.ssl.TrustManager; public class SslConfiguration { private final boolean enabled; private final String keyStoreFileName; private final char[] VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::keyStorePassword; private final SSLContext sslContext; private final String trustStoreFileName; private final char[] VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::trustStorePassword; SslConfiguration(boolean enabled, String keyStoreFileName, char[] keyStorePassword, SSLContext sslContext, String trustStoreFileName, char[] trustStorePassword) { this.enabled = enabled; this.keyStoreFileName = keyStoreFileName; this.keyStorePassword = VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::keyStorePassword; this.sslContext = sslContext; this.trustStoreFileName = trustStoreFileName; this.trustStorePassword = 
VAULT::VAULT_BLOCK::ATTRIBUTE_NAME::trustStorePassword; } public boolean enabled() { return enabled; } public String keyStoreFileName() { return keyStoreFileName; } public char[] keyStorePassword() { return keyStorePassword; } public SSLContext sslContext() { return sslContext; } public String trustStoreFileName() { return trustStoreFileName; } public char[] trustStorePassword() { return trustStorePassword; } @Override public String toString() { return \"SslConfiguration [enabled=\" + enabled + \", keyStoreFileName=\" + keyStoreFileName + \", sslContext=\" + sslContext + \", trustStoreFileName=\" + trustStoreFileName + \"]\"; } }", "<system-properties> <property name=\"java.security.krb5.conf\" value=\"/tmp/infinispan/krb5.conf\"/> <property name=\"java.security.krb5.debug\" value=\"true\"/> <property name=\"jboss.security.disable.secdomain.option\" value=\"true\"/> </system-properties> <security-domain name=\"infinispan-server\" cache-type=\"default\"> <authentication> <login-module code=\"Kerberos\" flag=\"required\"> <module-option name=\"debug\" value=\"true\"/> <module-option name=\"storeKey\" value=\"true\"/> <module-option name=\"refreshKrb5Config\" value=\"true\"/> <module-option name=\"useKeyTab\" value=\"true\"/> <module-option name=\"doNotPrompt\" value=\"true\"/> <module-option name=\"keyTab\" value=\"/tmp/infinispan/infinispan.keytab\"/> <module-option name=\"principal\" value=\"HOTROD/[email protected]\"/> </login-module> </authentication> </security-domain>", "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"default\"> <authentication security-realm=\"ApplicationRealm\"> <sasl server-name=\"node0\" mechanisms=\"{mechanism_name}\" qop=\"{qop_name}\" strength=\"{value}\"> <policy> <no-anonymous value=\"true\" /> </policy> <property name=\"com.sun.security.sasl.digest.utf8\">true</property> </sasl> </authentication> </hotrod-connector>", "GssExample { com.sun.security.auth.module.Krb5LoginModule required client=TRUE; };", "java.security.auth.login.config=gss.conf java.security.krb5.conf=/etc/krb5.conf", "public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler() { } public MyCallbackHandler (String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } }} LoginContext lc = new LoginContext(\"GssExample\", new MyCallbackHandler(\"krb_user\", \"krb_password\".toCharArra()));lc.login();Subject clientSubject = lc.getSubject(); ConfigurationBuilder clientBuilder = new ConfigurationBuilder();clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) 
.socketTimeout(1200000) .security() .authentication() .enable() .serverName(\"infinispan-server\") .saslMechanism(\"GSSAPI\") .clientSubject(clientSubject) .callbackHandler(new MyCallbackHandler());remoteCacheManager = new RemoteCacheManager(clientBuilder.build());RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\");", "<hotrod-connector socket-binding=\"hotrod\" cache-container=\"default\"> <authentication security-realm=\"ApplicationRealm\"> <sasl server-name=\"myhotrodserver\" mechanisms=\"DIGEST-MD5\" qop=\"auth\" /> </authentication> </hotrod-connector>", "public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler (String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } }} ConfigurationBuilder clientBuilder = new ConfigurationBuilder();clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .socketTimeout(1200000) .security() .authentication() .enable() .serverName(\"myhotrodserver\") .saslMechanism(\"DIGEST-MD5\") .callbackHandler(new MyCallbackHandler(\"myuser\", \"ApplicationRealm\", \"qwer1234!\".toCharArray()));remoteCacheManager = new RemoteCacheManager(clientBuilder.build());RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\");", "<security-realms> <security-realm name=\"ApplicationRealm\"> <authentication> <ldap connection=\"ldap_connection\" recursive=\"true\" base-dn=\"cn=users,dc=infinispan,dc=org\"> <username-filter attribute=\"cn\" /> </ldap> </authentication> </security-realm> </security-realms> <outbound-connections> <ldap name=\"ldap_connection\" url=\"ldap://my_ldap_server\" search-dn=\"CN=test,CN=Users,DC=infinispan,DC=org\" search-credential=\"Test_password\"/> </outbound-connections>", "<security-realm name=\"ApplicationRealm\"> <server-identities> <ssl protocol=\"...\"> <keystore path=\"...\" relative-to=\"...\" keystore-password=\"...\" alias=\"...\" key-password=\"...\" /> </ssl> </server-identities> [... authentication/authorization ...] </security-realms>", "UnrecoverableKeyException: Cannot recover key" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/sect-data_security_for_remote_client_server_mode
Chapter 37. MetadataTemplate schema reference
Chapter 37. MetadataTemplate schema reference Used in: BuildConfigTemplate , DeploymentTemplate , InternalServiceTemplate , PodDisruptionBudgetTemplate , PodTemplate , ResourceTemplate , StatefulSetTemplate Full list of MetadataTemplate schema properties Labels and Annotations are used to identify and organize resources, and are configured in the metadata property. For example: # ... template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io . Labels and annotations containing strimzi.io are used internally by Streams for Apache Kafka and cannot be configured. 37.1. MetadataTemplate schema properties Property Property type Description labels map Labels added to the OpenShift resource. annotations map Annotations added to the OpenShift resource.
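For instance, to apply the labels and annotations above to the broker pods of a Kafka cluster, the pod template is nested under the component being customized. The following sketch assumes the standard Kafka custom resource layout; the resource name and the label and annotation values are illustrative:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    template:
      pod:
        metadata:
          labels:
            label1: value1
          annotations:
            annotation1: value1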
[ "template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-metadatatemplate-reference
Part III. Setting up the subscriptions service for data collection
Part III. Setting up the subscriptions service for data collection To set up the environment for the subscriptions service data collection, connect your Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat Ansible systems to the Hybrid Cloud Console services through one or more data collection tools. After you complete the steps to set up this environment, you can continue with the steps to activate and open the subscriptions service. Do these steps To gather Red Hat Enterprise Linux usage data, complete at least one of the following steps to connect your Red Hat Enterprise Linux systems to the Hybrid Cloud Console by enabling a data collection tool. This connection enables subscription usage data to show in the subscriptions service. Deploy Insights on every RHEL system that is managed by Red Hat Satellite: Deploying Red Hat Insights Ensure that Satellite is configured to manage your RHEL systems and install the Satellite inventory upload plugin: Installing the Satellite inventory upload plugin Ensure that Red Hat Subscription Management is configured to manage your RHEL systems: Registering systems to Red Hat Subscription Management For pay-as-you-go On-Demand subscriptions for metered RHEL, ensure that a cloud integration is configured in the Hybrid Cloud Console for collection of the metering data. Connecting cloud integrations to the subscriptions service To gather Red Hat OpenShift usage data, complete the following step for Red Hat OpenShift data collection on the Hybrid Cloud Console. Set up the connection between Red Hat OpenShift and the subscriptions service based upon the operating system that is used for clusters: Connecting Red Hat OpenShift to the subscriptions service To gather Red Hat Ansible usage data, no additional setup steps are necessary. Ansible data collection is configured automatically during the provisioning of the Ansible control plane.
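For example, the host-based connections described above are typically established on each RHEL system with commands along the following lines, run as root; the organization ID and activation key are placeholders you must replace with your own values:
subscription-manager register --org=<organization_id> --activationkey=<activation_key>
insights-client --register
Systems managed by Satellite register against the Satellite server instead, so run only the steps that apply to your environment.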
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/assembly-setting-up-subscriptionwatch
4.4. Logical Volume Administration
4.4. Logical Volume Administration This section describes the commands that perform the various aspects of logical volume administration. 4.4.1. Creating Linear Logical Volumes To create a logical volume, use the lvcreate command. If you do not specify a name for the logical volume, the default name lvol # is used where # is the internal number of the logical volume. When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes. The following command creates a logical volume 10 gigabytes in size in the volume group vg1 . The default unit for logical volume size is megabytes. The following command creates a 1500 megabyte linear logical volume named testlv in the volume group testvg , creating the block device /dev/testvg/testlv . The following command creates a 50 gigabyte logical volume named gfslv from the free extents in volume group vg0 . You can use the -l argument of the lvcreate command to specify the size of the logical volume in extents. You can also use this argument to specify the percentage of the size of a related volume group, logical volume, or set of physical volumes. The suffix %VG denotes the total size of the volume group, the suffix %FREE the remaining free space in the volume group, and the suffix %PVS the free space in the specified physical volumes. For a snapshot, the size can be expressed as a percentage of the total size of the origin logical volume with the suffix %ORIGIN (100%ORIGIN provides space for the whole origin). When expressed as a percentage, the size defines an upper limit for the number of logical extents in the new logical volume. The precise number of logical extents in the new LV is not determined until the command has completed. The following command creates a logical volume called mylv that uses 60% of the total space in volume group testvg . The following command creates a logical volume called yourlv that uses all of the unallocated space in the volume group testvg . You can use the -l argument of the lvcreate command to create a logical volume that uses the entire volume group. Another way to create a logical volume that uses the entire volume group is to use the vgdisplay command to find the "Total PE" size and to use those results as input to the lvcreate command. The following commands create a logical volume called mylv that fills the volume group named testvg . The underlying physical volumes used to create a logical volume can be important if the physical volume needs to be removed, so you may need to consider this possibility when you create the logical volume. For information on removing a physical volume from a volume group, see Section 4.3.7, "Removing Physical Volumes from a Volume Group" . To create a logical volume to be allocated from a specific physical volume in the volume group, specify the physical volume or volumes at the end of the lvcreate command line. The following command creates a logical volume named testlv in volume group testvg allocated from the physical volume /dev/sdg1 . You can specify which extents of a physical volume are to be used for a logical volume.
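For reference, the size-based and percentage-based commands described above correspond to invocations like the following; the names and sizes match the prose, and the extent-range examples in the next paragraph additionally append a PhysicalVolumePath:start-end suffix to select specific extents:
lvcreate -L 10G vg1
lvcreate -L 1500 -n testlv testvg
lvcreate -L 50G -n gfslv vg0
lvcreate -l 60%VG -n mylv testvg
lvcreate -l 100%FREE -n yourlv testvg
lvcreate -L 1500 -n testlv testvg /dev/sdg1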
The following example creates a linear logical volume out of extents 0 through 24 of physical volume /dev/sda1 and extents 50 through 124 of physical volume /dev/sdb1 in volume group testvg . The following example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and then continues laying out the logical volume at extent 100. The default policy for how the extents of a logical volume are allocated is inherit , which applies the same policy as for the volume group. These policies can be changed using the lvchange command. For information on allocation policies, see Section 4.3.1, "Creating Volume Groups" . 4.4.2. Creating Striped Volumes For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O. For general information about striped volumes, see Section 2.3.2, "Striped Logical Volumes" . When you create a striped logical volume, you specify the number of stripes with the -i argument of the lvcreate command. This determines over how many physical volumes the logical volume will be striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device. For example, in a two-legged stripe, the maximum size is twice the size of the smaller device. In a three-legged stripe, the maximum size is three times the size of the smallest device. The following command creates a striped logical volume across 2 physical volumes with a stripe of 64 kilobytes. The logical volume is 50 gigabytes in size, is named gfslv , and is carved out of volume group vg0 . As with linear volumes, you can specify the extents of the physical volume that you are using for the stripe. The following command creates a striped volume 100 extents in size that stripes across two physical volumes, is named stripelv and is in volume group testvg . The stripe will use sectors 0-49 of /dev/sda1 and sectors 50-99 of /dev/sdb1 . 4.4.3. RAID Logical Volumes LVM supports RAID0/1/4/5/6/10. Note RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. If you require non-exclusive mirrored volumes, you must create the volumes with a mirror segment type, as described in Section 4.4.4, "Creating Mirrored Volumes" . To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Table 4.1, "RAID Segment Types" describes the possible RAID segment types. Table 4.1. RAID Segment Types Segment type Description raid1 RAID1 mirroring. This is the default value for the --type argument of the lvcreate command when you specify the -m but you do not specify striping. raid4 RAID4 dedicated parity disk raid5 Same as raid5_ls raid5_la RAID5 left asymmetric. Rotating parity 0 with data continuation raid5_ra RAID5 right asymmetric. Rotating parity N with data continuation raid5_ls RAID5 left symmetric. Rotating parity 0 with data restart raid5_rs RAID5 right symmetric. 
Rotating parity N with data restart raid6 Same as raid6_zr raid6_zr RAID6 zero restart Rotating parity zero (left-to-right) with data restart raid6_nr RAID6 N restart Rotating parity N (left-to-right) with data restart raid6_nc RAID6 N continue Rotating parity N (left-to-right) with data continuation raid10 Striped mirrors. This is the default value for the --type argument of the lvcreate command if you specify the -m and you specify a number of stripes that is greater than 1. Striping of mirror sets raid0/raid0_meta (Red Hat Enterprise Linux 7.3 and later) Striping. RAID0 spreads logical volume data across multiple data subvolumes in units of stripe size. This is used to increase performance. Logical volume data will be lost if any of the data subvolumes fail. For information on creating RAID0 volumes, see Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . For most users, specifying one of the five available primary types ( raid1 , raid4 , raid5 , raid6 , raid10 ) should be sufficient. When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size for every data or parity subvolume in the array. For example, creating a 2-way RAID1 array results in two metadata subvolumes ( lv_rmeta_0 and lv_rmeta_1 ) and two data subvolumes ( lv_rimage_0 and lv_rimage_1 ). Similarly, creating a 3-way stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes ( lv_rmeta_0 , lv_rmeta_1 , lv_rmeta_2 , and lv_rmeta_3 ) and 4 data subvolumes ( lv_rimage_0 , lv_rimage_1 , lv_rimage_2 , and lv_rimage_3 ). The following command creates a 2-way RAID1 array named my_lv in the volume group my_vg that is one gigabyte in size. You can create RAID1 arrays with different numbers of copies according to the value you specify for the -m argument. Similarly, you specify the number of stripes for a RAID 4/5/6 logical volume with the -i argument . You can also specify the stripe size with the -I argument. The following command creates a RAID5 array (3 stripes + 1 implicit parity drive) named my_lv in the volume group my_vg that is one gigabyte in size. Note that you specify the number of stripes just as you do for an LVM striped volume; the correct number of parity drives is added automatically. The following command creates a RAID6 array (3 stripes + 2 implicit parity drives) named my_lv in the volume group my_vg that is one gigabyte in size. After you have created a RAID logical volume with LVM, you can activate, change, remove, display, and use the volume just as you would any other LVM logical volume. When you create RAID10 logical volumes, the background I/O required to initialize the logical volumes with a sync operation can crowd out other I/O operations to LVM devices, such as updates to volume group metadata, particularly when you are creating many RAID logical volumes. This can cause the other LVM operations to slow down. You can control the rate at which a RAID logical volume is initialized by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. The Rate is specified as an amount per second for each device in the array. 
If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. The following command creates a 2-way RAID10 array with 3 stripes that is 10 gigabytes in size with a maximum recovery rate of 128 kiB/sec/device. The array is named my_lv and is in the volume group my_vg . You can also specify minimum and maximum recovery rates for a RAID scrubbing operation. For information on RAID scrubbing, see Section 4.4.3.11, "Scrubbing a RAID Logical Volume" . Note You can generate commands to create logical volumes on RAID storage with the LVM RAID Calculator application. This application uses the information you input about your current or planned storage to generate these commands. The LVM RAID Calculator application can be found at https://access.redhat.com/labs/lvmraidcalculator/ . The following sections describe the administrative tasks you can perform on LVM RAID devices: Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . Section 4.4.3.2, "Converting a Linear Device to a RAID Device" Section 4.4.3.3, "Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume" Section 4.4.3.4, "Converting a Mirrored LVM Device to a RAID1 Device" Section 4.4.3.5, "Resizing a RAID Logical Volume" Section 4.4.3.6, "Changing the Number of Images in an Existing RAID1 Device" Section 4.4.3.7, "Splitting off a RAID Image as a Separate Logical Volume" Section 4.4.3.8, "Splitting and Merging a RAID Image" Section 4.4.3.9, "Setting a RAID fault policy" Section 4.4.3.10, "Replacing a RAID device" Section 4.4.3.11, "Scrubbing a RAID Logical Volume" Section 4.4.3.12, "RAID Takeover (Red Hat Enterprise Linux 7.4 and Later)" Section 4.4.3.13, "Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later)" Section 4.4.3.14, "Controlling I/O Operations on a RAID1 Logical Volume" Section 4.4.3.15, "Changing the region size on a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and later)" 4.4.3.1. Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later) The format for the command to create a RAID0 volume is as follows. Table 4.2. RAID0 Command Creation parameters Parameter Description --type raid0[_meta] Specifying raid0 creates a RAID0 volume without metadata volumes. Specifying raid0_meta creates a RAID0 volume with metadata volumes. Because RAID0 is non-resilient, it does not have to store any mirrored data blocks as RAID1/10 do, or calculate and store any parity blocks as RAID4/5/6 do. Hence, it does not need metadata volumes to keep state about resynchronization progress of mirrored or parity blocks. Metadata volumes become mandatory on a conversion from RAID0 to RAID4/5/6/10, however, and specifying raid0_meta preallocates those metadata volumes to prevent a respective allocation failure. --stripes Stripes Specifies the number of devices to spread the logical volume across. --stripesize StripeSize Specifies the size of each stripe in kilobytes. This is the amount of data that is written to one device before moving to the next device. VolumeGroup Specifies the volume group to use. PhysicalVolumePath ... Specifies the devices to use.
If this is not specified, LVM will choose the number of devices specified by the Stripes option, one for each stripe. 4.4.3.2. Converting a Linear Device to a RAID Device You can convert an existing linear logical volume to a RAID device by using the --type argument of the lvconvert command. The following command converts the linear logical volume my_lv in volume group my_vg to a 2-way RAID1 array. Since RAID logical volumes are composed of metadata and data subvolume pairs, when you convert a linear device to a RAID1 array, a new metadata subvolume is created and associated with the original logical volume on (one of) the same physical volumes that the linear volume is on. The additional images are added in metadata/data subvolume pairs. For example, if the original device is as follows: After conversion to a 2-way RAID1 array the device contains the following data and metadata subvolume pairs: If the metadata image that pairs with the original logical volume cannot be placed on the same physical volume, the lvconvert will fail. 4.4.3.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume You can convert an existing RAID1 LVM logical volume to an LVM linear logical volume with the lvconvert command by specifying the -m0 argument. This removes all the RAID data subvolumes and all the RAID metadata subvolumes that make up the RAID array, leaving the top-level RAID1 image as the linear logical volume. The following example displays an existing LVM RAID1 logical volume. The following command converts the LVM RAID1 logical volume my_vg/my_lv to an LVM linear device. When you convert an LVM RAID1 logical volume to an LVM linear volume, you can specify which physical volumes to remove. The following example shows the layout of an LVM RAID1 logical volume made up of two images: /dev/sda1 and /dev/sdb1 . In this example, the lvconvert command specifies that you want to remove /dev/sda1 , leaving /dev/sdb1 as the physical volume that makes up the linear device. 4.4.3.4. Converting a Mirrored LVM Device to a RAID1 Device You can convert an existing mirrored LVM device with a segment type of mirror to a RAID1 LVM device with the lvconvert command by specifying the --type raid1 argument. This renames the mirror subvolumes ( *_mimage_* ) to RAID subvolumes ( *_rimage_* ). In addition, the mirror log is removed and metadata subvolumes ( *_rmeta_* ) are created for the data subvolumes on the same physical volumes as the corresponding data subvolumes. The following example shows the layout of a mirrored logical volume my_vg/my_lv . The following command converts the mirrored logical volume my_vg/my_lv to a RAID1 logical volume. 4.4.3.5. Resizing a RAID Logical Volume You can resize a RAID logical volume in the following ways; You can increase the size of a RAID logical volume of any type with the lvresize or lvextend command. This does not change the number of RAID images. For striped RAID logical volumes the same stripe rounding constraints apply as when you create a striped RAID logical volume. For more information on extending a RAID volume, see Section 4.4.18, "Extending a RAID Volume" . You can reduce the size of a RAID logical volume of any type with the lvresize or lvreduce command. This does not change the number of RAID images. As with the lvextend command, the same stripe rounding constraints apply as when you create a striped RAID logical volume. For an example of a command to reduce the size of a logical volume, see Section 4.4.16, "Shrinking Logical Volumes" . 
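For instance, growing an existing RAID logical volume by two gigabytes, or resizing it to an absolute size of 20 gigabytes, can be done with invocations such as the following; the volume names are illustrative, and the stripe rounding constraints noted above still apply:
lvextend -L +2G my_vg/my_lv
lvresize -L 20G my_vg/my_lv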
As of Red Hat Enterprise Linux 7.4, you can change the number of stripes on a striped RAID logical volume ( raid4/5/6/10 ) with the --stripes N parameter of the lvconvert command. This increases or reduces the size of the RAID logical volume by the capacity of the stripes added or removed. Note that raid10 volumes are capable only of adding stripes. This capability is part of the RAID reshaping feature that allows you to change attributes of a RAID logical volume while keeping the same RAID level. For information on RAID reshaping and examples of using the lvconvert command to reshape a RAID logical volume, see the lvmraid (7) man page. 4.4.3.6. Changing the Number of Images in an Existing RAID1 Device You can change the number of images in an existing RAID1 array just as you can change the number of images in the earlier implementation of LVM mirroring. Use the lvconvert command to specify the number of additional metadata/data subvolume pairs to add or remove. For information on changing the volume configuration in the earlier implementation of LVM mirroring, see Section 4.4.4.4, "Changing Mirrored Volume Configuration" . When you add images to a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to add to the device. You can also optionally specify on which physical volumes the new metadata/data image pairs will reside. Metadata subvolumes (named *_rmeta_* ) always exist on the same physical devices as their data subvolume counterparts *_rimage_* ). The metadata/data subvolume pairs will not be created on the same physical volumes as those from another metadata/data subvolume pair in the RAID array (unless you specify --alloc anywhere ). The format for the command to add images to a RAID1 volume is as follows: For example, the following command displays the LVM device my_vg/my_lv , which is a 2-way RAID1 array: The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device: When you add an image to a RAID1 array, you can specify which physical volumes to use for the image. The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device, specifying that the physical volume /dev/sdd1 be used for the array: To remove images from a RAID1 array, use the following command. When you remove images from a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to remove from the device. You can also optionally specify the physical volumes from which to remove the device. Additionally, when an image and its associated metadata subvolume volume are removed, any higher-numbered images will be shifted down to fill the slot. If you remove lv_rimage_1 from a 3-way RAID1 array that consists of lv_rimage_0 , lv_rimage_1 , and lv_rimage_2 , this results in a RAID1 array that consists of lv_rimage_0 and lv_rimage_1 . The subvolume lv_rimage_2 will be renamed and take over the empty slot, becoming lv_rimage_1 . The following example shows the layout of a 3-way RAID1 logical volume my_vg/my_lv . The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume. The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume, specifying the physical volume that contains the image to remove as /dev/sde1 . 4.4.3.7. 
Splitting off a RAID Image as a Separate Logical Volume You can split off an image of a RAID logical volume to form a new logical volume. The procedure for splitting off a RAID image is the same as the procedure for splitting off a redundant image of a mirrored logical volume, as described in Section 4.4.4.2, "Splitting Off a Redundant Image of a Mirrored Logical Volume" . The format of the command to split off a RAID image is as follows: Just as when you are removing a RAID image from an existing RAID1 logical volume (as described in Section 4.4.3.6, "Changing the Number of Images in an Existing RAID1 Device" ), when you remove a RAID data subvolume (and its associated metadata subvolume) from the middle of the device any higher numbered images will be shifted down to fill the slot. The index numbers on the logical volumes that make up a RAID array will thus be an unbroken sequence of integers. Note You cannot split off a RAID image if the RAID1 array is not yet in sync. The following example splits a 2-way RAID1 logical volume, my_lv , into two linear logical volumes, my_lv and new . The following example splits a 3-way RAID1 logical volume, my_lv , into a 2-way RAID1 logical volume, my_lv , and a linear logical volume, new 4.4.3.8. Splitting and Merging a RAID Image You can temporarily split off an image of a RAID1 array for read-only use while keeping track of any changes by using the --trackchanges argument in conjunction with the --splitmirrors argument of the lvconvert command. This allows you to merge the image back into the array at a later time while resyncing only those portions of the array that have changed since the image was split. The format for the lvconvert command to split off a RAID image is as follows. When you split off a RAID image with the --trackchanges argument, you can specify which image to split but you cannot change the name of the volume being split. In addition, the resulting volumes have the following constraints. The new volume you create is read-only. You cannot resize the new volume. You cannot rename the remaining array. You cannot resize the remaining array. You can activate the new volume and the remaining array independently. You can merge an image that was split off with the --trackchanges argument specified by executing a subsequent lvconvert command with the --merge argument. When you merge the image, only the portions of the array that have changed since the image was split are resynced. The format for the lvconvert command to merge a RAID image is as follows. The following example creates a RAID1 logical volume and then splits off an image from that volume while tracking changes to the remaining array. The following example splits off an image from a RAID1 volume while tracking changes to the remaining array, then merges the volume back into the array. Once you have split off an image from a RAID1 volume, you can make the split permanent by issuing a second lvconvert --splitmirrors command, repeating the initial lvconvert command that split the image without specifying the --trackchanges argument. This breaks the link that the --trackchanges argument created. After you have split an image with the --trackchanges argument, you cannot issue a subsequent lvconvert --splitmirrors command on that array unless your intent is to permanently split the image being tracked. The following sequence of commands splits an image and tracks the image and then permanently splits off the image being tracked. 
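A sketch of such a sequence, with illustrative volume and name arguments, is:
lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv
lvconvert --splitmirrors 1 -n new my_vg/my_lv
The first command splits off an image while tracking changes; the second, issued without --trackchanges, makes the split permanent and names the resulting linear volume new .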
Note, however, that the following sequence of commands will fail. Similarly, the following sequence of commands will fail as well, since the split image is not the image being tracked. 4.4.3.9. Setting a RAID fault policy LVM RAID handles device failures in an automatic fashion based on the preferences defined by the raid_fault_policy field in the lvm.conf file. If the raid_fault_policy field is set to allocate , the system will attempt to replace the failed device with a spare device from the volume group. If there is no available spare device, this will be reported to the system log. If the raid_fault_policy field is set to warn , the system will produce a warning and the log will indicate that a device has failed. This allows the user to determine the course of action to take. As long as there are enough devices remaining to support usability, the RAID logical volume will continue to operate. 4.4.3.9.1. The allocate RAID Fault Policy In the following example, the raid_fault_policy field has been set to allocate in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sde device fails, the system log will display error messages. Since the raid_fault_policy field has been set to allocate , the failed device is replaced with a new device from the volume group. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the raid_fault_policy has been set to allocate but there are no spare devices, the allocation will fail, leaving the logical volume as it is. If the allocation fails, you have the option of fixing the drive, then deactivating and activating the logical volume; this is described in Section 4.4.3.9.2, "The warn RAID Fault Policy" . Alternately, you can replace the failed device, as described in Section 4.4.3.10, "Replacing a RAID device" . 4.4.3.9.2. The warn RAID Fault Policy In the following example, the raid_fault_policy field has been set to warn in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sdh device fails, the system log will display error messages. In this case, however, LVM will not automatically attempt to repair the RAID device by replacing one of the images. Instead, if the device has failed you can replace the device with the --repair argument of the lvconvert command, as shown below. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the device failure is a transient failure or you are able to repair the device that failed, you can initiate recovery of the failed device with the --refresh option of the lvchange command. Previously it was necessary to deactivate and then activate the logical volume. The following command refreshes a logical volume. 4.4.3.10. Replacing a RAID device RAID is not like traditional LVM mirroring. LVM mirroring required failed devices to be removed or the mirrored logical volume would hang. 
RAID arrays can keep on running with failed devices. In fact, for RAID types other than RAID1, removing a device would mean converting to a lower level RAID (for example, from RAID6 to RAID5, or from RAID4 or RAID5 to RAID0). Therefore, rather than removing a failed device unconditionally and potentially allocating a replacement, LVM allows you to replace a device in a RAID volume in a one-step solution by using the --replace argument of the lvconvert command. The format for the lvconvert --replace is as follows. The following example creates a RAID1 logical volume and then replaces a device in that volume. The following example creates a RAID1 logical volume and then replaces a device in that volume, specifying which physical volume to use for the replacement. You can replace more than one RAID device at a time by specifying multiple replace arguments, as in the following example. Note When you specify a replacement drive using the lvconvert --replace command, the replacement drives should never be allocated from extra space on drives already used in the array. For example, lv_rimage_0 and lv_rimage_1 should not be located on the same physical volume. 4.4.3.11. Scrubbing a RAID Logical Volume LVM provides scrubbing support for RAID logical volumes. RAID scrubbing is the process of reading all the data and parity blocks in an array and checking to see whether they are coherent. You initiate a RAID scrubbing operation with the --syncaction option of the lvchange command. You specify either a check or repair operation. A check operation goes over the array and records the number of discrepancies in the array but does not repair them. A repair operation corrects the discrepancies as it finds them. The format of the command to scrub a RAID logical volume is as follows: Note The lvchange --syncaction repair vg/raid_lv operation does not perform the same function as the lvconvert --repair vg/raid_lv operation. The lvchange --syncaction repair operation initiates a background synchronization operation on the array, while the lvconvert --repair operation is designed to repair/replace failed devices in a mirror or RAID logical volume. In support of the new RAID scrubbing operation, the lvs command now supports two new printable fields: raid_sync_action and raid_mismatch_count . These fields are not printed by default. To display these fields you specify them with the -o parameter of the lvs , as follows. The raid_sync_action field displays the current synchronization operation that the raid volume is performing. It can be one of the following values: idle : All sync operations complete (doing nothing) resync : Initializing an array or recovering after a machine failure recover : Replacing a device in the array check : Looking for array inconsistencies repair : Looking for and repairing inconsistencies The raid_mismatch_count field displays the number of discrepancies found during a check operation. The Cpy%Sync field of the lvs command now prints the progress of any of the raid_sync_action operations, including check and repair . The lv_attr field of the lvs command output now provides additional indicators in support of the RAID scrubbing operation. Bit 9 of this field displays the health of the logical volume, and it now supports the following indicators. ( m )ismatches indicates that there are discrepancies in a RAID logical volume. This character is shown after a scrubbing operation has detected that portions of the RAID are not coherent. 
( r )efresh indicates that a device in a RAID array has suffered a failure and the kernel regards it as failed, even though LVM can read the device label and considers the device to be operational. The logical volume should be (r)efreshed to notify the kernel that the device is now available, or the device should be (r)eplaced if it is suspected of having failed. For information on the lvs command, see Section 4.8.2, "Object Display Fields" . When you perform a RAID scrubbing operation, the background I/O required by the sync operations can crowd out other I/O operations to LVM devices, such as updates to volume group metadata. This can cause the other LVM operations to slow down. You can control the rate at which the RAID logical volume is scrubbed by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvchange command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. 4.4.3.12. RAID Takeover (Red Hat Enterprise Linux 7.4 and Later) LVM supports Raid takeover , which means converting a RAID logical volume from one RAID level to another (such as from RAID 5 to RAID 6). Changing the RAID level is usually done to increase or decrease resilience to device failures or to restripe logical volumes. You use the lvconvert for RAID takeover. For information on RAID takeover and for examples of using the lvconvert to convert a RAID logical volume, see the lvmraid (7) man page. 4.4.3.13. Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later) RAID reshaping means changing attributes of a RAID logical volume while keeping the same RAID level. Some attributes you can change include RAID layout, stripe size, and number of stripes. For information on RAID reshaping and examples of using the lvconvert command to reshape a RAID logical volume, see the lvmraid (7) man page. 4.4.3.14. Controlling I/O Operations on a RAID1 Logical Volume You can control the I/O operations for a device in a RAID1 logical volume by using the --writemostly and --writebehind parameters of the lvchange command. The format for using these parameters is as follows. --[raid]writemostly PhysicalVolume [:{t|y|n}] Marks a device in a RAID1 logical volume as write-mostly . All reads to these drives will be avoided unless necessary. Setting this parameter keeps the number of I/O operations to the drive to a minimum. By default, the write-mostly attribute is set to yes for the specified physical volume in the logical volume. It is possible to remove the write-mostly flag by appending :n to the physical volume or to toggle the value by specifying :t . 
The --writemostly argument can be specified more than one time in a single command, making it possible to toggle the write-mostly attributes for all the physical volumes in a logical volume at once. --[raid]writebehind IOCount Specifies the maximum number of outstanding writes that are allowed to devices in a RAID1 logical volume that are marked as write-mostly . Once this value is exceeded, writes become synchronous, causing all writes to the constituent devices to complete before the array signals the write has completed. Setting the value to zero clears the preference and allows the system to choose the value arbitrarily. 4.4.3.15. Changing the region size on a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and later) When you create a RAID logical volume, the region size for the logical volume will be the value of the raid_region_size parameter in the /etc/lvm/lvm.conf file. You can override this default value with the -R option of the lvcreate command. After you have created a RAID logical volume, you can change the region size of the volume with the -R option of the lvconvert command. The following example changes the region size of logical volume vg/raidlv to 4096K. The RAID volume must be synced in order to change the region size. 4.4.4. Creating Mirrored Volumes For the Red Hat Enterprise Linux 7.0 release, LVM supports RAID 1/4/5/6/10, as described in Section 4.4.3, "RAID Logical Volumes" . RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. If you require non-exclusive mirrored volumes, you must create the volumes with a mirror segment type, as described in this section. Note For information on converting an existing LVM device with a segment type of mirror to a RAID1 LVM device, see Section 4.4.3.4, "Converting a Mirrored LVM Device to a RAID1 Device" . Note Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume with a segment type of mirror on a single node. However, in order to create a mirrored LVM volume in a cluster, the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking. For an example of creating a mirrored volume in a cluster, see Section 5.5, "Creating a Mirrored LVM Logical Volume in a Cluster" . Attempting to run multiple LVM mirror creation and conversion commands in quick succession from multiple nodes in a cluster might cause a backlog of these commands. This might cause some of the requested operations to time out and, subsequently, fail. To avoid this issue, it is recommended that cluster mirror creation commands be executed from one node of the cluster. When you create a mirrored volume, you specify the number of copies of the data to make with the -m argument of the lvcreate command. Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the file system. The following command creates a mirrored logical volume with a single mirror. The volume is 50 gigabytes in size, is named mirrorlv , and is carved out of volume group vg0 : An LVM mirror divides the device being copied into regions that, by default, are 512KB in size. 
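A sketch of the 50 gigabyte mirrorlv example just described; requesting the mirror segment type explicitly with --type mirror is an assumption here:
lvcreate --type mirror -L 50G -m 1 -n mirrorlv vg0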
You can use the -R argument of the lvcreate command to specify the region size in megabytes. You can also change the default region size by editing the mirror_region_size setting in the lvm.conf file. Note Due to limitations in the cluster infrastructure, cluster mirrors greater than 1.5TB cannot be created with the default region size of 512KB. Users that require larger mirrors should increase the region size from its default to something larger. Failure to increase the region size will cause LVM creation to hang and may hang other LVM commands as well. As a general guideline for specifying the region size for mirrors that are larger than 1.5TB, you could take your mirror size in terabytes and round up that number to the power of 2, using that number as the -R argument to the lvcreate command. For example, if your mirror size is 1.5TB, you could specify -R 2 . If your mirror size is 3TB, you could specify -R 4 . For a mirror size of 5TB, you could specify -R 8 . The following command creates a mirrored logical volume with a region size of 2MB: When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots and ensures that the mirror does not need to be re-synced every time a machine reboots or crashes. You can specify instead that this log be kept in memory with the --mirrorlog core argument; this eliminates the need for an extra log device, but it requires that the entire mirror be resynchronized at every reboot. The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named ondiskmirvol and has a single mirror. The volume is 12MB in size and keeps the mirror log in memory. The mirror log is created on a separate device from the devices on which any of the mirror legs are created. It is possible, however, to create the mirror log on the same device as one of the mirror legs by using the --alloc anywhere argument of the vgcreate command. This may degrade performance, but it allows you to create a mirror even if you have only two underlying devices. The following command creates a mirrored logical volume with a single mirror for which the mirror log is on the same device as one of the mirror legs. In this example, the volume group vg0 consists of only two devices. This command creates a 500 MB volume named mirrorlv in the vg0 volume group. Note With clustered mirrors, the mirror log management is completely the responsibility of the cluster node with the currently lowest cluster ID. Therefore, when the device holding the cluster mirror log becomes unavailable on a subset of the cluster, the clustered mirror can continue operating without any impact, as long as the cluster node with lowest ID retains access to the mirror log. Since the mirror is undisturbed, no automatic corrective action (repair) is issued, either. When the lowest-ID cluster node loses access to the mirror log, however, automatic action will kick in (regardless of accessibility of the log from other nodes). To create a mirror log that is itself mirrored, you can specify the --mirrorlog mirrored argument. 
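For example, a mirrored mirror log might be requested as follows; this sketch anticipates the 12MB bigvg example described next:
lvcreate --type mirror -L 12M -m 1 --mirrorlog mirrored -n twologvol bigvg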
The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named twologvol and has a single mirror. The volume is 12MB in size and the mirror log is mirrored, with each log kept on a separate device. Just as with a standard mirror log, it is possible to create the redundant mirror logs on the same device as the mirror legs by using the --alloc anywhere argument of the vgcreate command. This may degrade performance, but it allows you to create a redundant mirror log even if you do not have sufficient underlying devices for each log to be kept on a separate device than the mirror legs. When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. You can specify which devices to use for the mirror legs and log, and which extents of the devices to use. To force the log onto a particular disk, specify exactly one extent on the disk on which it will be placed. LVM does not necessary respect the order in which devices are listed in the command line. If any physical volumes are listed that is the only space on which allocation will take place. Any physical extents included in the list that are already allocated will get ignored. The following command creates a mirrored logical volume with a single mirror and a single log that is not mirrored. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on device /dev/sda1 , the second leg of the mirror is on device /dev/sdb1 , and the mirror log is on /dev/sdc1 . The following command creates a mirrored logical volume with a single mirror. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on extents 0 through 499 of device /dev/sda1 , the second leg of the mirror is on extents 0 through 499 of device /dev/sdb1 , and the mirror log starts on extent 0 of device /dev/sdc1 . These are 1MB extents. If any of the specified extents have already been allocated, they will be ignored. Note You can combine striping and mirroring in a single logical volume. Creating a logical volume while simultaneously specifying the number of mirrors ( --mirrors X ) and the number of stripes ( --stripes Y ) results in a mirror device whose constituent devices are striped. 4.4.4.1. Mirrored Logical Volume Failure Policy You can define how a mirrored logical volume behaves in the event of a device failure with the mirror_image_fault_policy and mirror_log_fault_policy parameters in the activation section of the lvm.conf file. When these parameters are set to remove , the system attempts to remove the faulty device and run without it. When these parameters are set to allocate , the system attempts to remove the faulty device and tries to allocate space on a new device to be a replacement for the failed device. This policy acts like the remove policy if no suitable device and space can be allocated for the replacement. By default, the mirror_log_fault_policy parameter is set to allocate . Using this policy for the log is fast and maintains the ability to remember the sync state through crashes and reboots. 
If you set this policy to remove , when a log device fails the mirror converts to using an in-memory log; in this instance, the mirror will not remember its sync status across crashes and reboots and the entire mirror will be re-synced. By default, the mirror_image_fault_policy parameter is set to remove . With this policy, if a mirror image fails the mirror will convert to a non-mirrored device if there is only one remaining good copy. Setting this policy to allocate for a mirror device requires the mirror to resynchronize the devices; this is a slow process, but it preserves the mirror characteristic of the device. Note When an LVM mirror suffers a device failure, a two-stage recovery takes place. The first stage involves removing the failed devices. This can result in the mirror being reduced to a linear device. The second stage, if the mirror_log_fault_policy parameter is set to allocate , is to attempt to replace any of the failed devices. Note, however, that there is no guarantee that the second stage will choose devices previously in-use by the mirror that had not been part of the failure if others are available. For information on manually recovering from an LVM mirror failure, see Section 6.2, "Recovering from LVM Mirror Failure" . 4.4.4.2. Splitting Off a Redundant Image of a Mirrored Logical Volume You can split off a redundant image of a mirrored logical volume to form a new logical volume. To split off an image, use the --splitmirrors argument of the lvconvert command, specifying the number of redundant images to split off. You must use the --name argument of the command to specify a name for the newly-split-off logical volume. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs. In this example, LVM selects which devices to split off. You can specify which devices to split off. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs consisting of devices /dev/sdc1 and /dev/sde1 . 4.4.4.3. Repairing a Mirrored Logical Device You can use the lvconvert --repair command to repair a mirror after a disk failure. This brings the mirror back into a consistent state. The lvconvert --repair command is an interactive command that prompts you to indicate whether you want the system to attempt to replace any failed devices. To skip the prompts and replace all of the failed devices, specify the -y option on the command line. To skip the prompts and replace none of the failed devices, specify the -f option on the command line. To skip the prompts and still indicate different replacement policies for the mirror image and the mirror log, you can specify the --use-policies argument to use the device replacement policies specified by the mirror_log_fault_policy and mirror_device_fault_policy parameters in the lvm.conf file. 4.4.4.4. Changing Mirrored Volume Configuration You can increase or decrease the number of mirrors that a logical volume contains by using the lvconvert command. This allows you to convert a logical volume from a mirrored volume to a linear volume or from a linear volume to a mirrored volume. You can also use this command to reconfigure other mirror parameters of an existing logical volume, such as corelog . When you convert a linear volume to a mirrored volume, you are creating mirror legs for an existing volume. 
This means that your volume group must contain the devices and space for the mirror legs and for the mirror log. If you lose a leg of a mirror, LVM converts the volume to a linear volume so that you still have access to the volume, without the mirror redundancy. After you replace the leg, use the lvconvert command to restore the mirror. This procedure is provided in Section 6.2, "Recovering from LVM Mirror Failure" . The following command converts the linear logical volume vg00/lvol1 to a mirrored logical volume. The following command converts the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg. The following example adds an additional mirror leg to the existing logical volume vg00/lvol1 . This example shows the configuration of the volume before and after the lvconvert command changed the volume to a volume with two mirror legs. 4.4.5. Creating Thinly-Provisioned Logical Volumes Logical volumes can be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Note This section provides an overview of the basic commands you use to create and grow thinly-provisioned logical volumes. For detailed information on LVM thin provisioning as well as information on using the LVM commands and utilities with thinly-provisioned logical volumes, see the lvmthin (7) man page. Note Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin volumes must be exclusively activated on only one cluster node. To create a thin volume, perform the following tasks: Create a volume group with the vgcreate command. Create a thin pool with the lvcreate command. Create a thin volume in the thin pool with the lvcreate command. You can use the -T (or --thin ) option of the lvcreate command to create either a thin pool or a thin volume. You can also use -T option of the lvcreate command to create both a thin pool and a thin volume in that pool at the same time with a single command. The following command uses the -T option of the lvcreate command to create a thin pool named mythinpool in the volume group vg001 and that is 100M in size. Note that since you are creating a pool of physical space, you must specify the size of the pool. The -T option of the lvcreate command does not take an argument; it deduces what type of device is to be created from the other options the command specifies. The following command uses the -T option of the lvcreate command to create a thin volume named thinvolume in the thin pool vg001/mythinpool . Note that in this case you are specifying virtual size, and that you are specifying a virtual size for the volume that is greater than the pool that contains it. The following command uses the -T option of the lvcreate command to create a thin pool and a thin volume in that pool by specifying both a size and a virtual size argument for the lvcreate command. This command creates a thin pool named mythinpool in the volume group vg001 and it also creates a thin volume named thinvolume in that pool. 
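Sketches of these three invocations; the 100M pool size follows the text, while the 1G virtual size is an assumption chosen to be larger than the pool:
lvcreate -L 100M -T vg001/mythinpool
lvcreate -V 1G -T vg001/mythinpool -n thinvolume
lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume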
You can also create a thin pool by specifying the --thinpool parameter of the lvcreate command. Unlike the -T option, the --thinpool parameter requires an argument, which is the name of the thin pool logical volume that you are creating. The following example specifies the --thinpool parameter of the lvcreate command to create a thin pool named mythinpool in the volume group vg001 and that is 100M in size: Use the following criteria for using chunk size: Smaller chunk size requires more metadata and hinders the performance, but it provides better space utilization with snapshots. Huge chunk size requires less metadata manipulation but makes the snapshot less efficient. LVM2 calculates chunk size in the following manner: By default, LVM starts with a 64KiB chunk size and increases its value when the resulting size of the thin pool metadata device grows above 128MiB, so the metadata size remains compact. This may result in some big chunk size values, which is less efficient for snapshot usage. In this case, the smaller chunk size and bigger metadata size is a better option. If the volume data size is in the range of TiB, use ~15.8GiB metadata size, which is the maximum supported size, and use the chunk size as per your requirement. But it is not possible to increase the metadata size if you need to extend this volume data size and have a small chunk size. Warning Red Hat recommends to use at least the default chunk size. If the chunk size is too small and your volume runs out of space for metadata, the volume is unable to create data. Monitor your logical volumes to ensure that they are expanded or more storage created before metadata volumes become completely full. Ensure that you set up your thin pool with a large enough chunk size so that they do not run out of room for metadata. Striping is supported for pool creation. The following command creates a 100M thin pool named pool in volume group vg001 with two 64 kB stripes and a chunk size of 256 kB. It also creates a 1T thin volume, vg00/thin_lv . You can extend the size of a thin volume with the lvextend command. You cannot, however, reduce the size of a thin pool. The following command resizes an existing thin pool that is 100M in size by extending it another 100M. As with other types of logical volumes, you can rename the volume with the lvrename , you can remove the volume with the lvremove , and you can display information about the volume with the lvs and lvdisplay commands. By default, the lvcreate command sets the size of the thin pool's metadata logical volume according to the formula (Pool_LV_size / Pool_LV_chunk_size * 64). If you will have large numbers of snapshots or if you have small chunk sizes for your thin pool and thus expect significant growth of the size of the thin pool at a later time, you may need to increase the default value of the thin pool's metadata volume with the --poolmetadatasize parameter of the lvcreate command. The supported value for the thin pool's metadata logical volume is in the range between 2MiB and 16GiB. You can use the --thinpool parameter of the lvconvert command to convert an existing logical volume to a thin pool volume. When you convert an existing logical volume to a thin pool volume, you must use the --poolmetadata parameter in conjunction with the --thinpool parameter of the lvconvert to convert an existing logical volume to the thin pool volume's metadata volume. 
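A sketch of such a conversion, using the lv1 and lv2 volumes that the example below describes:
lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2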
Note Converting a logical volume to a thin pool volume or a thin pool metadata volume destroys the content of the logical volume, since in this case the lvconvert does not preserve the content of the devices but instead overwrites the content. The following example converts the existing logical volume lv1 in volume group vg001 to a thin pool volume and converts the existing logical volume lv2 in volume group vg001 to the metadata volume for that thin pool volume. 4.4.6. Creating Snapshot Volumes Note LVM supports thinly-provisioned snapshots. For information on creating thinly-provisioned snapshot volumes, see Section 4.4.7, "Creating Thinly-Provisioned Snapshot Volumes" . Use the -s argument of the lvcreate command to create a snapshot volume. A snapshot volume is writable. Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group. However, if you need to create a consistent backup of data on a clustered logical volume you can activate the volume exclusively and then create the snapshot. For information on activating logical volumes exclusively on one node, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . Note LVM snapshots are supported for mirrored logical volumes. Snapshots are supported for RAID logical volumes. For information on creating RAID logical volumes, see Section 4.4.3, "RAID Logical Volumes" . LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. If you specify a snapshot volume that is larger than this, the system will create a snapshot volume that is only as large as will be needed for the size of the origin. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 4.4.20, "Controlling Logical Volume Activation" . The following command creates a snapshot logical volume that is 100 MB in size named /dev/vg00/snap . This creates a snapshot of the origin logical volume named /dev/vg00/lvol1 . If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory in order to access the contents of the file system to run a backup while the original file system continues to get updated. After you create a snapshot logical volume, specifying the origin volume on the lvdisplay command yields output that includes a list of all snapshot logical volumes and their status (active or inactive). The following example shows the status of the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. The lvs command, by default, displays the origin volume and the current percentage of the snapshot volume being used. The following example shows the default output for the lvs command for a system that includes the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. Warning Because the snapshot increases in size as the origin volume changes, it is important to monitor the percentage of the snapshot volume regularly with the lvs command to be sure it does not fill. A snapshot that is 100% full is lost completely, as a write to unchanged parts of the origin would be unable to succeed without corrupting the snapshot. 
In addition to the snapshot itself being invalidated when full, any mounted file systems on that snapshot device are forcibly unmounted, avoiding the inevitable file system errors upon access to the mount point. In addition, you can specify the snapshot_autoextend_threshold option in the lvm.conf file. This option allows automatic extension of a snapshot whenever the remaining snapshot space drops below the threshold you set. This feature requires that there be unallocated space in the volume group. LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. Similarly, automatic extension of a snapshot will not increase the size of a snapshot volume beyond the maximum calculated size that is necessary for the snapshot. Once a snapshot has grown large enough to cover the origin, it is no longer monitored for automatic extension. Information on setting snapshot_autoextend_threshold and snapshot_autoextend_percent is provided in the lvm.conf file itself. For information about the lvm.conf file, see Appendix B, The LVM Configuration Files . 4.4.7. Creating Thinly-Provisioned Snapshot Volumes Red Hat Enterprise Linux provides support for thinly-provisioned snapshot volumes. For information on the benefits and limitations of thin snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . Note This section provides an overview of the basic commands you use to create and grow thinly-provisioned snapshot volumes. For detailed information on LVM thin provisioning as well as information on using the LVM commands and utilities with thinly-provisioned logical volumes, see the lvmthin (7) man page. Important When creating a thin snapshot volume, you do not specify the size of the volume. If you specify a size parameter, the snapshot that will be created will not be a thin snapshot volume and will not use the thin pool for storing data. For example, the command lvcreate -s vg/thinvolume -L10M will not create a thin snapshot, even though the origin volume is a thin volume. Thin snapshots can be created for thinly-provisioned origin volumes, or for origin volumes that are not thinly-provisioned. You can specify a name for the snapshot volume with the --name option of the lvcreate command. The following command creates a thinly-provisioned snapshot volume of the thinly-provisioned logical volume vg001/thinvolume that is named mysnapshot1 . Note When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full. For information on extending the size of a thin volume, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" A thin snapshot volume has the same characteristics as any other thin volume. You can independently activate the volume, extend the volume, rename the volume, remove the volume, and even snapshot the volume. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 4.4.20, "Controlling Logical Volume Activation" . You can also create a thinly-provisioned snapshot of a non-thinly-provisioned logical volume. Since the non-thinly-provisioned logical volume is not contained within a thin pool, it is referred to as an external origin . External origin volumes can be used and shared by many thinly-provisioned snapshot volumes, even from different thin pools. 
The external origin must be inactive and read-only at the time the thinly-provisioned snapshot is created. To create a thinly-provisioned snapshot of an external origin, you must specify the --thinpool option. The following command creates a thin snapshot volume of the read-only inactive volume origin_volume . The thin snapshot volume is named mythinsnap . The logical volume origin_volume then becomes the thin external origin for the thin snapshot volume mythinsnap in volume group vg001 that will use the existing thin pool vg001/pool . Because the origin volume must be in the same volume group as the snapshot volume, you do not need to specify the volume group when specifying the origin logical volume. You can create a second thinly-provisioned snapshot volume of the first snapshot volume, as in the following command. As of Red Hat Enterprise Linux 7.2, you can display a list of all ancestors and descendants of a thin snapshot logical volume by specifying the lv_ancestors and lv_descendants reporting fields of the lvs command. In the following example: stack1 is an origin volume in volume group vg001 . stack2 is a snapshot of stack1 stack3 is a snapshot of stack2 stack4 is a snapshot of stack3 Additionally: stack5 is also a snapshot of stack2 stack6 is a snapshot of stack5 Note The lv_ancestors and lv_descendants fields display existing dependencies but do not track removed entries which can break a dependency chain if the entry was removed from the middle of the chain. For example, if you remove the logical volume stack3 from this sample configuration, the display is as follows. As of Red Hat Enterprise Linux 7.3, however, you can configure your system to track and display logical volumes that have been removed, and you can display the full dependency chain that includes those volumes by specifying the lv_ancestors_full and lv_descendants_full fields. For information on tracking, displaying, and removing historical logical volumes, see Section 4.4.21, "Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later)" . 4.4.8. Creating LVM Cache Logical Volumes As of the Red Hat Enterprise Linux 7.1 release, LVM provides full support for LVM cache logical volumes. A cache logical volume uses a small logical volume consisting of fast block devices (such as SSD drives) to improve the performance of a larger and slower logical volume by storing the frequently used blocks on the smaller, faster logical volume. LVM caching uses the following LVM logical volume types. All of these associated logical volumes must be in the same volume group. Origin logical volume - the large, slow logical volume Cache pool logical volume - the small, fast logical volume, which is composed of two devices: the cache data logical volume, and the cache metadata logical volume Cache data logical volume - the logical volume containing the data blocks for the cache pool logical volume Cache metadata logical volume - the logical volume containing the metadata for the cache pool logical volume, which holds the accounting information that specifies where data blocks are stored (for example, on the origin logical volume or the cache data logical volume). Cache logical volume - the logical volume containing the origin logical volume and the cache pool logical volume. This is the resultant usable device which encapsulates the various cache volume components. The following procedure creates an LVM cache logical volume. 
Create a volume group that contains a slow physical volume and a fast physical volume. In this example. /dev/sde1 is a slow device and /dev/sdf1 is a fast device and both devices are contained in volume group VG . Create the origin volume. This example creates an origin volume named lv that is ten gigabytes in size and that consists of /dev/sde1 , the slow physical volume. Create the cache pool logical volume. This example creates the cache pool logical volume named cpool on the fast device /dev/sdf1 , which is part of the volume group VG . The cache pool logical volume this command creates consists of the hidden cache data logical volume cpool_cdata and the hidden cache metadata logical volume cpool_cmeta . For more complicated configurations you may need to create the cache data and the cache metadata logical volumes individually and then combine the volumes into a cache pool logical volume. For information on this procedure, see the lvmcache (7) man page. Create the cache logical volume by linking the cache pool logical volume to the origin logical volume. The resulting user-accessible cache logical volume takes the name of the origin logical volume. The origin logical volume becomes a hidden logical volume with _corig appended to the original name. Note that this conversion can be done live, although you must ensure you have performed a backup first. Optionally, as of Red Hat Enterprise Linux release 7.2, you can convert the cached logical volume to a thin pool logical volume. Note that any thin logical volumes created from the pool will share the cache. The following command uses the fast device, /dev/sdf1 , for allocating the thin pool metadata ( lv_tmeta ). This is the same device that is used by the cache pool volume, which means that the thin pool metadata volume shares that device with both the cache data logical volume cpool_cdata and the cache metadata logical volume cpool_cmeta . For further information on LVM cache volumes, including additional administrative examples, see the lvmcache (7) man page. For information on creating thinly-provisioned logical volumes, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" . 4.4.9. Merging Snapshot Volumes You can use the --merge option of the lvconvert command to merge a snapshot into its origin volume. If both the origin and snapshot volume are not open, the merge will start immediately. Otherwise, the merge will start the first time either the origin or snapshot are activated and both are closed. Merging a snapshot into an origin that cannot be closed, for example a root file system, is deferred until the time the origin volume is activated. When merging starts, the resulting logical volume will have the origin's name, minor number and UUID. While the merge is in progress, reads or writes to the origin appear as they were directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed. The following command merges snapshot volume vg00/lvol1_snap into its origin. You can specify multiple snapshots on the command line, or you can use LVM object tags to specify that multiple snapshots be merged to their respective origins. In the following example, logical volumes vg00/lvol1 , vg00/lvol2 , and vg00/lvol3 are all tagged with the tag @some_tag . The following command merges the snapshot logical volumes for all three volumes serially: vg00/lvol1 , then vg00/lvol2 , then vg00/lvol3 . If the --background option were used, all snapshot logical volume merges would start in parallel. 
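Sketches of the two merge invocations described above:
lvconvert --merge vg00/lvol1_snap
lvconvert --merge @some_tag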
For information on tagging LVM objects, see Appendix D, LVM Object Tags . For further information on the lvconvert --merge command, see the lvconvert (8) man page. 4.4.10. Persistent Device Numbers Major and minor device numbers are allocated dynamically at module load. Some applications work best if the block device is always activated with the same device (major and minor) number. You can specify these with the lvcreate and the lvchange commands by using the following arguments: Use a large minor number to be sure that it has not already been allocated to another device dynamically. If you are exporting a file system using NFS, specifying the fsid parameter in the exports file may avoid the need to set a persistent device number within LVM. 4.4.11. Changing the Parameters of a Logical Volume Group To change the parameters of a logical volume, use the lvchange command. For a listing of the parameters you can change, see the lvchange (8) man page. You can use the lvchange command to activate and deactivate logical volumes. To activate and deactivate all the logical volumes in a volume group at the same time, use the vgchange command, as described in Section 4.3.9, "Changing the Parameters of a Volume Group" . The following command changes the permission on volume lvol1 in volume group vg00 to be read-only. 4.4.12. Renaming Logical Volumes To rename an existing logical volume, use the lvrename command. Either of the following commands renames logical volume lvold in volume group vg02 to lvnew . Renaming the root logical volume requires additional reconfiguration. For information on renaming a root volume, see How to rename root volume group or logical volume in Red Hat Enterprise Linux . For more information on activating logical volumes on individual nodes in a cluster, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . 4.4.13. Removing Logical Volumes To remove an inactive logical volume, use the lvremove command. If the logical volume is currently mounted, unmount the volume before removing it. In addition, in a clustered environment you must deactivate a logical volume before it can be removed. The following command removes the logical volume /dev/testvg/testlv from the volume group testvg . Note that in this case the logical volume has not been deactivated. You could explicitly deactivate the logical volume before removing it with the lvchange -an command, in which case you would not see the prompt verifying whether you want to remove an active logical volume. 4.4.14. Displaying Logical Volumes There are three commands you can use to display properties of LVM logical volumes: lvs , lvdisplay , and lvscan . The lvs command provides logical volume information in a configurable form, displaying one line per logical volume. The lvs command provides a great deal of format control, and is useful for scripting. For information on using the lvs command to customize your output, see Section 4.8, "Customized Reporting for LVM" . The lvdisplay command displays logical volume properties (such as size, layout, and mapping) in a fixed format. The following command shows the attributes of lvol2 in vg00 . If snapshot logical volumes have been created for this original logical volume, this command shows a list of all snapshot logical volumes and their status (active or inactive) as well. The lvscan command scans for all logical volumes in the system and lists them, as in the following example. 4.4.15. 
Growing Logical Volumes To increase the size of a logical volume, use the lvextend command. When you extend the logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it. The following command extends the logical volume /dev/myvg/homevol to 12 gigabytes. The following command adds another gigabyte to the logical volume /dev/myvg/homevol . As with the lvcreate command, you can use the -l argument of the lvextend command to specify the number of extents by which to increase the size of the logical volume. You can also use this argument to specify a percentage of the volume group, or a percentage of the remaining free space in the volume group. The following command extends the logical volume called testlv to fill all of the unallocated space in the volume group myvg . After you have extended the logical volume it is necessary to increase the file system size to match. By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume so you do not need to worry about specifying the same size for each of the two commands. 4.4.16. Shrinking Logical Volumes You can reduce the size of a logical volume with the lvreduce command. Note Shrinking is not supported on a GFS2 or XFS file system, so you cannot reduce the size of a logical volume that contains a GFS2 or XFS file system. If the logical volume you are reducing contains a file system, to prevent data loss you must ensure that the file system is not using the space in the logical volume that is being reduced. For this reason, it is recommended that you use the --resizefs option of the lvreduce command when the logical volume contains a file system. When you use this option, the lvreduce command attempts to reduce the file system before shrinking the logical volume. If shrinking the file system fails, as can occur if the file system is full or the file system does not support shrinking, then the lvreduce command will fail and not attempt to shrink the logical volume. Warning In most cases, the lvreduce command warns about possible data loss and asks for a confirmation. However, you should not rely on these confirmation prompts to prevent data loss because in some cases you will not see these prompts, such as when the logical volume is inactive or the --resizefs option is not used. Note that using the --test option of the lvreduce command does not indicate whether the operation is safe, as this option does not check the file system or test the file system resize. The following command shrinks the logical volume lvol1 in volume group vg00 to be 64 megabytes. In this example, lvol1 contains a file system, which this command resizes together with the logical volume. This example shows the output of the command. Specifying the - sign before the resize value indicates that the value will be subtracted from the logical volume's actual size. The following example shows the command you would use if, instead of shrinking a logical volume to an absolute size of 64 megabytes, you wanted to shrink the volume by a value of 64 megabytes. 4.4.17. Extending a Striped Volume In order to increase the size of a striped logical volume, there must be enough free space on the underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe.
Instead, you must add at least two physical volumes to the volume group. For example, consider a volume group vg that consists of two underlying physical volumes, as displayed with the following vgs command. You can create a stripe using the entire amount of space in the volume group. Note that the volume group now has no more free space. The following command adds another physical volume to the volume group, which then has 135 gigabytes of additional space. At this point you cannot extend the striped logical volume to the full size of the volume group, because two underlying devices are needed in order to stripe the data. To extend the striped logical volume, add another physical volume and then extend the logical volume. In this example, having added two physical volumes to the volume group we can extend the logical volume to the full size of the volume group. If you do not have enough underlying physical devices to extend the striped logical volume, it is possible to extend the volume anyway if it does not matter that the extension is not striped, which may result in uneven performance. When adding space to the logical volume, the default operation is to use the same striping parameters of the last segment of the existing logical volume, but you can override those parameters. The following example extends the existing striped logical volume to use the remaining free space after the initial lvextend command fails. 4.4.18. Extending a RAID Volume You can grow RAID logical volumes with the lvextend command without performing a synchronization of the new RAID regions. If you specify the --nosync option when you create a RAID logical volume with the lvcreate command, the RAID regions are not synchronized when the logical volume is created. If you later extend a RAID logical volume that you have created with the --nosync option, the RAID extensions are not synchronized at that time, either. You can determine whether an existing logical volume was created with the --nosync option by using the lvs command to display the volume's attributes. A logical volume will show "R" as the first character in the attribute field if it is a RAID volume that was created without an initial synchronization, and it will show "r" if it was created with initial synchronization. The following command displays the attributes of a RAID logical volume named lv that was created without initial synchronization, showing "R" as the first character in the attribute field. The seventh character in the attribute field is "r", indicating a target type of RAID. For information on the meaning of the attribute field, see Table 4.5, "lvs Display Fields" . If you grow this logical volume with the lvextend command, the RAID extension will not be resynchronized. If you created a RAID logical volume without specifying the --nosync option of the lvcreate command, you can grow the logical volume without resynchronizing the mirror by specifying the --nosync option of the lvextend command. The following example extends a RAID logical volume that was created without the --nosync option, indicated that the RAID volume was synchronized when it was created. This example, however, specifies that the volume not be synchronized when the volume is extended. Note that the volume has an attribute of "r", but after executing the lvextend command with the --nosync option the volume has an attribute of "R". 
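As a sketch, extending without resynchronization might look like this; the volume group name vg and the 5 gigabyte increment are illustrative:
lvextend -L +5G vg/lv --nosync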
If a RAID volume is inactive, it will not automatically skip synchronization when you extend the volume, even if you create the volume with the --nosync option specified. Instead, you will be prompted whether to do a full resync of the extended portion of the logical volume. Note If a RAID volume is performing recovery, you cannot extend the logical volume if you created or extended the volume with the --nosync option specified. If you did not specify the --nosync option, however, you can extend the RAID volume while it is recovering. 4.4.19. Extending a Logical Volume with the cling Allocation Policy When extending an LVM volume, you can use the --alloc cling option of the lvextend command to specify the cling allocation policy. This policy will choose space on the same physical volumes as the last segment of the existing logical volume. If there is insufficient space on the physical volumes and a list of tags is defined in the lvm.conf file, LVM will check whether any of the tags are attached to the physical volumes and seek to match those physical volume tags between existing extents and new extents. For example, if you have logical volumes that are mirrored between two sites within a single volume group, you can tag the physical volumes according to where they are situated by tagging the physical volumes with @site1 and @site2 tags. You can then specify the following line in the lvm.conf file: For information on tagging physical volumes, see Appendix D, LVM Object Tags . In the following example, the lvm.conf file has been modified to contain the following line: Also in this example, a volume group taft has been created that consists of the physical volumes /dev/sdb1 , /dev/sdc1 , /dev/sdd1 , /dev/sde1 , /dev/sdf1 , /dev/sdg1 , and /dev/sdh1 . These physical volumes have been tagged with tags A , B , and C . The example does not use the C tag, but this will show that LVM uses the tags to select which physical volumes to use for the mirror legs. The following command creates a 10 gigabyte mirrored volume from the volume group taft . The following command shows which devices are used for the mirror legs and RAID metadata subvolumes. The following command extends the size of the mirrored volume, using the cling allocation policy to indicate that the mirror legs should be extended using physical volumes with the same tag. The following display command shows that the mirror legs have been extended using physical volumes with the same tag as the leg. Note that the physical volumes with a tag of C were ignored. 4.4.20. Controlling Logical Volume Activation You can flag a logical volume to be skipped during normal activation commands with the -k or --setactivationskip {y|n} option of the lvcreate or lvchange command. This flag is not applied during deactivation. You can determine whether this flag is set for a logical volume with the lvs command, which displays the k attribute as in the following example. By default, thin snapshot volumes are flagged for activation skip. You can activate a logical volume with the k attribute set by using the -K or --ignoreactivationskip option in addition to the standard -ay or --activate y option. The following command activates a thin snapshot logical volume. The persistent "activation skip" flag can be turned off when the logical volume is created by specifying the -kn or --setactivationskip n option of the lvcreate command. 
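Sketches of these two operations; the VG, ThinLV, and SnapLV names are illustrative:
lvchange -ay -K VG/SnapLV
lvcreate -s -kn -n SnapLV VG/ThinLV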
You can turn the flag off for an existing logical volume by specifying the -kn or --setactivationskip n option of the lvchange command. You can turn the flag on again with the -ky or --setactivationskip y option. The following command creates a snapshot logical volume without the activation skip flag. The following command removes the activation skip flag from a snapshot logical volume. You can control the default activation skip setting with the auto_set_activation_skip setting in the /etc/lvm/lvm.conf file. 4.4.21. Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later) As of Red Hat Enterprise Linux 7.3, you can configure your system to track thin snapshot and thin logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. You can configure your system to retain historical volumes for a defined period of time by specifying the retention time, in seconds, with the lvs_history_retention_time metadata option in the lvm.conf configuration file. A historical logical volume retains a simplified representation of the logical volume that has been removed, including the following reporting fields for the volume: lv_time_removed : the removal time of the logical volume lv_time : the creation time of the logical volume lv_name : the name of the logical volume lv_uuid : the UUID of the logical volume vg_name : the volume group that contains the logical volume. When a volume is removed, the historical logical volume name acquires a hyphen as a prefix. For example, when you remove the logical volume lvol1 , the name of the historical volume is -lvol1 . A historical logical volume cannot be reactivated. Even when the record_lvs_history metadata option is enabled, you can prevent the retention of historical logical volumes on an individual basis when you remove a logical volume by specifying the --nohistory option of the lvremove command. To include historical logical volumes in volume display, you specify the -H|--history option of an LVM display command. You can display a full thin snapshot dependency chain that includes historical volumes by specifying the lv_full_ancestors and lv_full_descendants reporting fields along with the -H option. The following series of commands provides examples of how you can display and manage historical logical volumes. Ensure that historical logical volumes are retained by setting record_lvs_history=1 in the lvm.conf file. This metadata option is not enabled by default. Enter the following command to display a thin provisioned snapshot chain. In this example: lvol1 is an origin volume, the first volume in the chain. lvol2 is a snapshot of lvol1 . lvol3 is a snapshot of lvol2 . lvol4 is a snapshot of lvol3 . lvol5 is also a snapshot of lvol3 . Note that even though the example lvs display command includes the -H option, no thin snapshot volume has yet been removed and there are no historical logical volumes to display. Remove logical volume lvol3 from the snapshot chain, then run the following lvs command again to see how historical logical volumes are displayed, along with their ancestors and descendants. You can use the lv_time_removed reporting field to display the time a historical volume was removed.
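A sketch of this removal-and-display sequence; the volume group name vg001 is an assumption, since the text does not name it:
lvremove vg001/lvol3
lvs -H -o name,lv_full_ancestors,lv_full_descendants vg001
lvs -H -o name,lv_time_removed vg001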
You can reference historical logical volumes individually in a display command by specifying the vgname/lvname format, as in the following example. Note that the fifth bit in the lv_attr field is set to h to indicate the volume is a historical volume. LVM does not keep historical logical volumes if the volume has no live descendant. This means that if you remove a logical volume at the end of a snapshot chain, the logical volume is not retained as a historical logical volume. Run the following commands to remove the volumes lvol1 and lvol2 and to see how the lvs command displays the volumes once they have been removed. To remove a historical logical volume completely, you can run the lvremove command again, specifying the name of the historical volume that now includes the hyphen, as in the following example. A historical logical volume is retained as long as there is a chain that includes live volumes in its descendants. This means that removing a logical volume also removes all of the historical logical volumes in the chain if no live descendant remains linked to them, as shown in the following example.
[ "lvcreate -L 10G vg1", "lvcreate -L 1500 -n testlv testvg", "lvcreate -L 50G -n gfslv vg0", "lvcreate -l 60%VG -n mylv testvg", "lvcreate -l 100%FREE -n yourlv testvg", "vgdisplay testvg | grep \"Total PE\" Total PE 10230 lvcreate -l 10230 -n mylv testvg", "lvcreate -L 1500 -n testlv testvg /dev/sdg1", "lvcreate -l 100 -n testlv testvg /dev/sda1:0-24 /dev/sdb1:50-124", "lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-", "lvcreate -L 50G -i 2 -I 64 -n gfslv vg0", "lvcreate -l 100 -i 2 -n stripelv testvg /dev/sda1:0-49 /dev/sdb1:50-99 Using default stripesize 64.00 KB Logical volume \"stripelv\" created", "lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg", "lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid10 -i 2 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg", "lvcreate --type raid0[_meta] --stripes Stripes --stripesize StripeSize VolumeGroup [ PhysicalVolumePath ...]", "lvconvert --type raid1 -m 1 my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(0)", "lvconvert --type raid1 -m 1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m0 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m0 my_vg/my_lv /dev/sda1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdb1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 15.20 my_lv_mimage_0(0),my_lv_mimage_1(0) [my_lv_mimage_0] /dev/sde1(0) [my_lv_mimage_1] /dev/sdf1(0) [my_lv_mlog] /dev/sdd1(0)", "lvconvert --type raid1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(0) [my_lv_rmeta_0] /dev/sde1(125) [my_lv_rmeta_1] /dev/sdf1(125)", "lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m + num_additional_images vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m 2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 56.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m 2 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 
my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m - num_fewer_images vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvconvert -m1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m1 my_vg/my_lv /dev/sde1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdf1(1) [my_lv_rimage_1] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdf1(0) [my_lv_rmeta_1] /dev/sdg1(0)", "lvconvert --splitmirrors count -n splitname vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 12.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1) new /dev/sdf1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) new /dev/sdg1(1)", "lvconvert --splitmirrors count --trackchanges vg/lv [ removable_PVs ]", "lvconvert --merge raid_image", "lvcreate --type raid1 -m 2 -L 1G -n my_lv .vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0) lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_2 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_2' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) my_lv_rimage_2 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv lv_rimage_1 split from my_lv for read-only purposes. 
Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) my_lv_rimage_1 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0) lvconvert --merge my_vg/my_lv_rimage_1 my_vg/my_lv_rimage_1 successfully merged back into my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvconvert --splitmirrors 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdc1(1) new /dev/sdd1(1)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv Cannot track more than one split image at a time", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) my_lv_rimage_1 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0) lvconvert --splitmirrors 1 -n new my_vg/my_lv /dev/sdc1 Unable to split additional image from my_lv while tracking changes for my_lv_rimage_1", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "grep lvm /var/log/messages Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, my_vg-my_lv, has failed. Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is not in-sync. Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is now in-sync.", "lvs -a -o name,copy_percent,devices vg Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. 
LV Copy% Devices lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0) [lv_rimage_0] /dev/sdh1(1) [lv_rimage_1] /dev/sdf1(1) [lv_rimage_2] /dev/sdg1(1) [lv_rmeta_0] /dev/sdh1(0) [lv_rmeta_1] /dev/sdf1(0) [lv_rmeta_2] /dev/sdg1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdh1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdh1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvconvert --repair my_vg/my_lv /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y lvs -a -o name,copy_percent,devices my_vg Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. LV Copy% Devices my_lv 64.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvchange --refresh my_vg/my_lv", "lvconvert --replace dev_to_remove vg/lv [ possible_replacements ]", "lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdb2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdb2(0) [my_lv_rmeta_2] /dev/sdc1(0) lvconvert --replace /dev/sdb2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 37.50 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc2(0) [my_lv_rmeta_2] /dev/sdc1(0)", "lvcreate --type raid1 -m 1 -L 100 -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) pvs PV VG Fmt Attr PSize PFree /dev/sda1 my_vg lvm2 a-- 1020.00m 916.00m /dev/sdb1 my_vg lvm2 a-- 1020.00m 916.00m /dev/sdc1 my_vg lvm2 a-- 1020.00m 1020.00m /dev/sdd1 my_vg lvm2 a-- 1020.00m 1020.00m lvconvert --replace /dev/sdb1 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvcreate --type raid1 -m 2 -L 100 -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdc1(0) lvconvert --replace /dev/sdb1 --replace /dev/sdc1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 60.00 
my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rimage_2] /dev/sde1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0) [my_lv_rmeta_2] /dev/sde1(0)", "lvchange --syncaction {check|repair} vg/raid_lv", "lvs -o +raid_sync_action,raid_mismatch_count vg/lv", "lvconvert -R 4096K vg/raid1 Do you really want to change the region_size 512.00 KiB of LV vg/raid1 to 4.00 MiB? [y/n]: y Changed region size on RAID LV vg/raid1 to 4.00 MiB.", "lvcreate --type mirror -L 50G -m 1 -n mirrorlv vg0", "lvcreate --type mirror -m 1 -L 2T -R 2 -n mirror vol_group", "lvcreate --type mirror -L 12MB -m 1 --mirrorlog core -n ondiskmirvol bigvg Logical volume \"ondiskmirvol\" created", "lvcreate --type mirror -L 500M -m 1 -n mirrorlv -alloc anywhere vg0", "lvcreate --type mirror -L 12MB -m 1 --mirrorlog mirrored -n twologvol bigvg Logical volume \"twologvol\" created", "lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1", "lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1:0-499 /dev/sdb1:0-499 /dev/sdc1:0", "lvconvert --splitmirrors 2 --name copy vg/lv", "lvconvert --splitmirrors 2 --name copy vg/lv /dev/sd[ce]1", "lvconvert -m1 vg00/lvol1", "lvconvert -m0 vg00/lvol1", "lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mlog] /dev/sdd1(0) lvconvert -m 2 vg00/lvol1 vg00/lvol1: Converted: 13.0% vg00/lvol1: Converted: 100.0% Logical volume lvol1 converted. lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0),lvol1_mimage_2(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mimage_2] /dev/sdc1(0) [lvol1_mlog] /dev/sdd1(0)", "lvcreate -L 100M -T vg001/mythinpool Rounding up size to full physical extent 4.00 MiB Logical volume \"mythinpool\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my mythinpool vg001 twi-a-tz 100.00m 0.00", "lvcreate -V 1G -T vg001/mythinpool -n thinvolume Logical volume \"thinvolume\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume Rounding up size to full physical extent 4.00 MiB Logical volume \"thinvolume\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvcreate -L 100M --thinpool mythinpool vg001 Rounding up size to full physical extent 4.00 MiB Logical volume \"mythinpool\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00", "lvcreate -i 2 -I 64 -c 256 -L 100M -T vg00/pool -V 1T --name thin_lv", "lvextend -L+100M vg001/mythinpool Extending logical volume mythinpool to 200.00 MiB Logical volume mythinpool successfully resized lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 200.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2 Converted vg001/lv1 to thin pool.", "lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1", "lvdisplay /dev/new_vg/lvol0 --- Logical volume --- LV Name /dev/new_vg/lvol0 VG Name new_vg LV UUID LBy1Tz-sr23-OjsI-LT03-nHLC-y8XW-EhCl78 LV Write Access read/write LV snapshot status source of 
/dev/new_vg/newvgsnap1 [active] LV Status available # open 0 LV Size 52.00 MB Current LE 13 Segments 1 Allocation inherit Read ahead sectors 0 Block device 253:2", "lvs LV VG Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg owi-a- 52.00M newvgsnap1 new_vg swi-a- 8.00M lvol0 0.20", "lvcreate -s --name mysnapshot1 vg001/thinvolume Logical volume \"mysnapshot1\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mysnapshot1 vg001 Vwi-a-tz 1.00g mythinpool thinvolume 0.00 mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvcreate -s --thinpool vg001/pool origin_volume --name mythinsnap", "lvcreate -s vg001/mythinsnap --name my2ndthinsnap", "lvs -o name,lv_ancestors,lv_descendants vg001 LV Ancestors Descendants stack1 stack2,stack3,stack4,stack5,stack6 stack2 stack1 stack3,stack4,stack5,stack6 stack3 stack2,stack1 stack4 stack4 stack3,stack2,stack1 stack5 stack2,stack1 stack6 stack6 stack5,stack2,stack1 pool", "lvs -o name,lv_ancestors,lv_descendants vg001 LV Ancestors Descendants stack1 stack2,stack5,stack6 stack2 stack1 stack5,stack6 stack4 stack5 stack2,stack1 stack6 stack6 stack5,stack2,stack1 pool", "pvcreate /dev/sde1 pvcreate /dev/sdf1 vgcreate VG /dev/sde1 /dev/sdf1", "lvcreate -L 10G -n lv VG /dev/sde1", "lvcreate --type cache-pool -L 5G -n cpool VG /dev/sdf1 Using default stripesize 64.00 KiB. Logical volume \"cpool\" created. lvs -a -o name,size,attr,devices VG LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2)", "lvconvert --type cache --cachepool cpool VG/lv Logical volume cpool is now cached. lvs -a -o name,size,attr,devices vg LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2) lv 10.00g Cwi-a-C--- lv_corig(0) [lv_corig] 10.00g owi-aoC--- /dev/sde1(0) [lvol0_pmspare] 8.00m ewi------- /dev/sdf1(0)", "lvconvert --type thin-pool VG/lv /dev/sdf1 WARNING: Converting logical volume VG/lv to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) Do you really want to convert VG/lv? [y/n]: y Converted VG/lv to thin pool. lvs -a -o name,size,attr,devices vg LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2) lv 10.00g twi-a-tz-- lv_tdata(0) [lv_tdata] 10.00g Cwi-aoC--- lv_tdata_corig(0) [lv_tdata_corig] 10.00g owi-aoC--- /dev/sde1(0) [lv_tmeta] 12.00m ewi-ao---- /dev/sdf1(1284) [lvol0_pmspare] 12.00m ewi------- /dev/sdf1(0) [lvol0_pmspare] 12.00m ewi------- /dev/sdf1(1287)", "lvconvert --merge vg00/lvol1_snap", "lvconvert --merge @some_tag", "--persistent y --major major --minor minor", "lvchange -pr vg00/lvol1", "lvrename /dev/vg02/lvold /dev/vg02/lvnew", "lvrename vg02 lvold lvnew", "lvremove /dev/testvg/testlv Do you really want to remove active logical volume \"testlv\"? 
[y/n]: y Logical volume \"testlv\" successfully removed", "lvdisplay -v /dev/vg00/lvol2", "lvscan ACTIVE '/dev/vg0/gfslv' [1.46 GB] inherit", "lvextend -L12G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 12 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended", "lvextend -L+1G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 13 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended", "lvextend -l +100%FREE /dev/myvg/testlv Extending logical volume testlv to 68.59 GB Logical volume testlv successfully resized", "lvreduce --resizefs -L 64M vg00/lvol1 fsck from util-linux 2.23.2 /dev/mapper/vg00-lvol1: clean, 11/25688 files, 8896/102400 blocks resize2fs 1.42.9 (28-Dec-2013) Resizing the filesystem on /dev/mapper/vg00-lvol1 to 65536 (1k) blocks. The filesystem on /dev/mapper/vg00-lvol1 is now 65536 blocks long. Size of logical volume vg00/lvol1 changed from 100.00 MiB (25 extents) to 64.00 MiB (16 extents). Logical volume vg00/lvol1 successfully resized.", "lvreduce --resizefs -L -64M vg00/lvol1", "vgs VG #PV #LV #SN Attr VSize VFree vg 2 0 0 wz--n- 271.31G 271.31G", "lvcreate -n stripe1 -L 271.31G -i 2 vg Using default stripesize 64.00 KB Rounding up size to full physical extent 271.31 GB Logical volume \"stripe1\" created lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe1 vg -wi-a- 271.31G /dev/sda1(0),/dev/sdb1(0)", "vgs VG #PV #LV #SN Attr VSize VFree vg 2 1 0 wz--n- 271.31G 0", "vgextend vg /dev/sdc1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 3 1 0 wz--n- 406.97G 135.66G", "lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required", "vgextend vg /dev/sdd1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 4 1 0 wz--n- 542.62G 271.31G lvextend vg/stripe1 -L 542G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 542.00 GB Logical volume stripe1 successfully resized", "lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required lvextend -i1 -l+100%FREE vg/stripe1", "lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg Rwi-a-r- 5.00g 100.00", "lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg rwi-a-r- 20.00m 100.00 lvextend -L +5G vg/lv --nosync Extending 2 mirror images. Extending logical volume lv to 5.02 GiB Logical volume lv successfully resized lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg Rwi-a-r- 5.02g 100.00", "cling_tag_list = [ \"@site1\", \"@site2\" ]", "cling_tag_list = [ \"@A\", \"@B\" ]", "pvs -a -o +pv_tags /dev/sd[bcdefgh] PV VG Fmt Attr PSize PFree PV Tags /dev/sdb1 taft lvm2 a-- 15.00g 15.00g A /dev/sdc1 taft lvm2 a-- 15.00g 15.00g B /dev/sdd1 taft lvm2 a-- 15.00g 15.00g B /dev/sde1 taft lvm2 a-- 15.00g 15.00g C /dev/sdf1 taft lvm2 a-- 15.00g 15.00g C /dev/sdg1 taft lvm2 a-- 15.00g 15.00g A /dev/sdh1 taft lvm2 a-- 15.00g 15.00g A", "lvcreate --type raid1 -m 1 -n mirror --nosync -L 10G taft WARNING: New raid1 won't be synchronised. Don't read what you didn't write! 
Logical volume \"mirror\" created", "lvs -a -o +devices LV VG Attr LSize Log Cpy%Sync Devices mirror taft Rwi-a-r--- 10.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0) [mirror_rimage_0] taft iwi-aor--- 10.00g /dev/sdb1(1) [mirror_rimage_1] taft iwi-aor--- 10.00g /dev/sdc1(1) [mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0) [mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)", "lvextend --alloc cling -L +10G taft/mirror Extending 2 mirror images. Extending logical volume mirror to 20.00 GiB Logical volume mirror successfully resized", "lvs -a -o +devices LV VG Attr LSize Log Cpy%Sync Devices mirror taft Rwi-a-r--- 20.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0) [mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdb1(1) [mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdg1(0) [mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdc1(1) [mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdd1(0) [mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0) [mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)", "lvs vg/thin1s1 LV VG Attr LSize Pool Origin thin1s1 vg Vwi---tz-k 1.00t pool0 thin1", "lvchange -ay -K VG/SnapLV", "lvcreate --type thin -n SnapLV -kn -s ThinLV --thinpool VG/ThinPoolLV", "lvchange -kn VG/SnapLV", "lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,lvol3,lvol4,lvol5 lvol2 lvol1 lvol3,lvol4,lvol5 lvol3 lvol2,lvol1 lvol4,lvol5 lvol4 lvol3,lvol2,lvol1 lvol5 lvol3,lvol2,lvol1 pool", "lvremove -f vg/lvol3 Logical volume \"lvol3\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,-lvol3,lvol4,lvol5 lvol2 lvol1 -lvol3,lvol4,lvol5 -lvol3 lvol2,lvol1 lvol4,lvol5 lvol4 -lvol3,lvol2,lvol1 lvol5 -lvol3,lvol2,lvol1 pool", "lvs -H -o name,full_ancestors,full_descendants,time_removed LV FAncestors FDescendants RTime lvol1 lvol2,-lvol3,lvol4,lvol5 lvol2 lvol1 -lvol3,lvol4,lvol5 -lvol3 lvol2,lvol1 lvol4,lvol5 2016-03-14 14:14:32 +0100 lvol4 -lvol3,lvol2,lvol1 lvol5 -lvol3,lvol2,lvol1 pool", "lvs -H vg/-lvol3 LV VG Attr LSize -lvol3 vg ----h----- 0", "lvremove -f vg/lvol5 Automatically removing historical logical volume vg/-lvol5. Logical volume \"lvol5\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,-lvol3,lvol4 lvol2 lvol1 -lvol3,lvol4 -lvol3 lvol2,lvol1 lvol4 lvol4 -lvol3,lvol2,lvol1 pool", "lvremove -f vg/lvol1 vg/lvol2 Logical volume \"lvol1\" successfully removed Logical volume \"lvol2\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants -lvol1 -lvol2,-lvol3,lvol4 -lvol2 -lvol1 -lvol3,lvol4 -lvol3 -lvol2,-lvol1 lvol4 lvol4 -lvol3,-lvol2,-lvol1 pool", "lvremove -f vg/-lvol3 Historical logical volume \"lvol3\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants -lvol1 -lvol2,lvol4 -lvol2 -lvol1 lvol4 lvol4 -lvol2,-lvol1 pool", "lvremove -f vg/lvol4 Automatically removing historical logical volume vg/-lvol1. Automatically removing historical logical volume vg/-lvol2. Automatically removing historical logical volume vg/-lvol4. Logical volume \"lvol4\" successfully removed" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/LV
Chapter 1. Introduction to Camel K
Chapter 1. Introduction to Camel K This chapter introduces the concepts, features, and cloud-native architecture provided by Red Hat Integration - Camel K: Section 1.1, "Camel K overview" Section 1.2, "Camel K features" Section 1.2.3, "Kamelets" Section 1.3, "Camel K development tooling" Section 1.4, "Camel K distributions" 1.1. Camel K overview Red Hat Integration - Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K is specifically designed for serverless and microservice architectures. You can use Camel K to instantly run your integration code written in Camel Domain Specific Language (DSL) directly on OpenShift. Camel K is a subproject of the Apache Camel open source community: https://github.com/apache/camel-k . Camel K is implemented in the Go programming language and uses the Kubernetes Operator SDK to automatically deploy integrations in the cloud. For example, this includes automatically creating services and routes on OpenShift. This provides much faster turnaround times when deploying and redeploying integrations in the cloud, such as a few seconds or less instead of minutes. The Camel K runtime provides significant performance optimizations. The Quarkus cloud-native Java framework is enabled by default to provide faster start up times, and lower memory and CPU footprints. When running Camel K in developer mode, you can make live updates to your integration DSL and view results instantly in the cloud on OpenShift, without waiting for your integration to redeploy. Using Camel K with OpenShift Serverless and Knative Serving, containers are created only as needed and are autoscaled under load up and down to zero. This reduces cost by removing the overhead of server provisioning and maintenance and enables you to focus on application development instead. Using Camel K with OpenShift Serverless and Knative Eventing, you can manage how components in your system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiencies through decoupled relationships between event producers and consumers using a publish-subscribe or event-streaming model. Additional resources Apache Camel K website Getting started with OpenShift Serverless 1.2. Camel K features The Camel K includes the following main platforms and features: 1.2.1. Platform and component versions OpenShift Container Platform 4.13, 4.14 OpenShift Serverless 1.31.0 Red Hat Build of Quarkus 2.13.8.Final-redhat-00006 Red Hat Camel Extensions for Quarkus 2.13.3.redhat-00008 Apache Camel K 1.10.5.redhat-00002 Apache Camel 3.18.6.redhat-00007 OpenJDK 11 1.2.2. Camel K features Knative Serving for autoscaling and scale-to-zero Knative Eventing for event-driven architectures Performance optimizations using Quarkus runtime by default Camel integrations written in Java or YAML DSL Development tooling with Visual Studio Code Monitoring of integrations using Prometheus in OpenShift Quickstart tutorials Kamelet Catalog of connectors to external systems such as AWS, Jira, and Salesforce The following diagram shows a simplified view of the Camel K cloud-native architecture: Additional resources Apache Camel architecture 1.2.3. Kamelets Kamelets hide the complexity of connecting to external systems behind a simple interface, which contains all the information needed to instantiate them, even for users who are not familiar with Camel. 
Kamelets are implemented as custom resources that you can install on an OpenShift cluster and use in Camel K integrations. Kamelets are route templates that use Camel components designed to connect to external systems without requiring deep understanding of the component. Kamelets abstract the details of connecting to external systems. You can also combine Kamelets to create complex Camel integrations, just like using standard Camel components. Additional resources Integrating Applications with Kamelets 1.3. Camel K development tooling The Camel K provides development tooling extensions for Visual Studio (VS) Code, Red Hat CodeReady WorkSpaces, and Eclipse Che. The Camel-based tooling extensions include features such as automatic completion of Camel DSL code, Camel K modeline configuration, and Camel K traits. The following VS Code development tooling extensions are available: VS Code Extension Pack for Apache Camel by Red Hat Tooling for Apache Camel K extension Language Support for Apache Camel extension Debug Adapter for Apache Camel K Additional extensions for OpenShift, Java and more For details on how to set up these VS Code extensions for Camel K, see Setting up your Camel K development environment . Important The following plugin VS Code Language support for Camel - a part of the Camel extension pack provides support for content assist when editing Camel routes and application.properties . To install a supported Camel K tooling extension for VS code to create, run and operate Camel K integrations on OpenShift, see VS Code Tooling for Apache Camel K by Red Hat extension To install a supported Camel debug tool extension for VS code to debug Camel integrations written in Java, YAML or XML locally, see Debug Adapter for Apache Camel by Red Hat For details about configurations and components to use the developer tool with specific product versions, see Camel K Supported Configurations and Camel K Component Details Note: The Camel K VS Code extensions are community features. Eclipse Che also provides these features using the vscode-camelk plug-in. For more information about scope of development support, see Development Support Scope of Coverage Additional resources VS Code tooling for Apache Camel K example Eclipse Che tooling for Apache Camel K 1.4. Camel K distributions Table 1.1. Red Hat Integration - Camel K distributions Distribution Description Location Operator image Container image for the Red Hat Integration - Camel K Operator: integration/camel-k-rhel8-operator OpenShift web console under Operators OperatorHub registry.redhat.io Maven repository Maven artifacts for Red Hat Integration - Camel K Red Hat provides Maven repositories that host the content we ship with our products. These repositories are available to download from the software downloads page. For Red Hat Integration - Camel K the following repositories are required: rhi-common rhi-camel-quarkus rhi-camel-k Installation of Red Hat Integration - Camel K in a disconnected environment (offline mode) is not supported. 
Software downloads of Red Hat build of Apache Camel Source code Source code for Red Hat Integration - Camel K Software downloads of Red Hat build of Apache Camel Quickstarts Quick start tutorials: Basic Java integration Event streaming integration JDBC integration JMS integration Kafka integration Knative integration SaaS integration Serverless API integration Transformations integration https://github.com/openshift-integration Note You must have a subscription for Red Hat build of Apache Camel K and be logged into the Red Hat Customer Portal to access the Red Hat Integration - Camel K distributions.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/getting_started_with_camel_k/introduction-to-camel-k
F.2. File Menu
F.2. File Menu The File menu provides actions to manage your workspace resources. Figure F.2. File Menu The New > submenu provides specific actions to create various generic workspace resources as well as Teiid Designer models and VDBs. Figure F.3. File Menu The File menu contains the following actions: New > Model Project - Create a new model project. New > Folder - Create a new folder within an existing project or folder. New > Model - Create a new model of a specified model type and class using the New Model Wizards. New > Virtual Database Definition - Create a new VDB, or Virtual Database Definition. Open File - Enables you to open a file for editing - including files that do not reside in the Workspace. Close (Ctrl+W) - Closes the active editor. You are prompted to save changes before the file closes. Close All (Shift+Ctrl+W) - Closes all open editors. You are prompted to save changes before the files close. Save (Ctrl+S) - Saves the contents of the active editor. Save As - Enables you to save the contents of the active editor under another file name or location. Save All (Shift+Ctrl+S) - Saves the contents of all open editors. Move - Launches a Refactor > Move resource dialog. Rename (F2) - Launches a Refactor > Rename resource dialog if a resource is selected; otherwise, an inline rename is performed. Refresh - Refreshes the resource with the contents in the file system. Convert Line Delimiters To - Alters the line delimiters for the selected files. Changes are immediate and persist until you change the delimiter again - you do not need to save the file. Print (Ctrl+P) - Prints the contents of the active editor. In the Teiid Designer, this action prints the diagram in the selected editor. Allows control over orientation (portrait or landscape), scaling, margins and page order. You can also specify a subset of the pages to print (for example, 2 through 8). Switch Workspace - Opens the Workspace Launcher, from which you can switch to a different workspace. This restarts the Workbench. Restart - Exits and restarts the Workbench. Import - Launches the Import Wizard, which provides several ways to construct or import models. Export - Launches the Export Wizard, which provides options for exporting model data. Properties (Alt+Enter) - Opens the Properties dialog for the currently selected resource. These include the path to the resource on the file system, the date of last modification, and its writable or executable state. Most Recent Files List - Contains a list of the most recently accessed files in the Workbench. You can open any of these files from the File menu by selecting the file name. Exit - Closes and exits the Workbench.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/file_menu
5.10. Starting the Cluster Software
5.10. Starting the Cluster Software After you have propagated the cluster configuration to the cluster nodes you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order: service ccsd start service cman start (or service lock_gulmd start for GULM clusters) service fenced start (DLM clusters only) service clvmd start , if CLVM has been used to create clustered volumes Note Shared storage for use in Red Hat Cluster Suite requires that you be running the cluster logical volume manager daemon ( clvmd ) or the High Availability Logical Volume Management agents (HA-LVM). If you are not able to use either the clvmd daemon or HA-LVM for operational reasons or because you do not have the correct entitlements, you must not use single-instance LVM on the shared disk as this may result in data corruption. If you have any concerns please contact your Red Hat service representative. service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) Start the Red Hat Cluster Suite management GUI. At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected.
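For example, on a DLM-based cluster node that uses CLVM, GFS, and rgmanager, the startup sequence described above amounts to running the following commands as root on each node, in this order (omit the services that your cluster does not use):
service ccsd start
service cman start
service fenced start
service clvmd start
service gfs start
service rgmanager start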
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-starting-cluster-ca
Chapter 12. Changing the role of a replica
Chapter 12. Changing the role of a replica In a replication topology, you can change the role of replicas. For example, if a supplier is unavailable due to a hardware outage, you can promote a consumer to a supplier. The other way around, you can demote, for example, a supplier with low hardware resources to a consumer and later add another supplier with new hardware. 12.1. Promoting a replica using the command line You can promote: A consumer to a hub or supplier A hub to a supplier This section describes how to promote a replica of the dc=example,dc=com suffix. Prerequisites The Directory Server instance is a member of a replication topology. The replica to promote is a consumer or hub. Procedure If the replica to promote is a hub with replication agreements, and the hub should no longer send data to other hosts after the promotion, remove the replication agreements: List the replication agreements on the hub: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-agmt list --suffix " dc=example,dc=com " dn: cn=example-agreement,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config cn: example-agreement ... The cn attribute contains the replication agreement name that you need in the next step. Remove the replication agreement from the hub: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-agmt delete --suffix " dc=example,dc=com " example-agreement Promote the instance: If you promote a consumer or hub to a supplier, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication promote --suffix " dc=example,dc=com " --newrole " supplier " --replica-id 2 Important The replica ID must be a unique integer value between 1 and 65534 for a suffix across all suppliers in the topology. If you promote a consumer to a hub, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication promote --suffix " dc=example,dc=com " --newrole " hub " If the replica in its new role should send updates to other hosts in the topology, create replication agreements. Additional resources Configuring single-supplier replication using the command line Configuring multi-supplier replication using the command line Configuring cascading replication using the command line 12.2. Promoting a replica using the web console You can promote: A consumer to a hub or supplier A hub to a supplier This section describes how to promote a replica of the dc=example,dc=com suffix. Prerequisites The Directory Server instance is a member of a replication topology. The replica to promote is a consumer or hub. You are logged in to the instance in the web console. Procedure If the replica to promote is a hub with replication agreements, and the hub should no longer send data to other hosts after the promotion, remove the replication agreements: Navigate to Replication Agreements . Click Actions next to the agreement you want to delete, and select Delete Agreement . Navigate to Replication Configuration , and click the Change Role button. If you promote a consumer or hub to a supplier, select Supplier , and enter a unique replica ID. Important The replica ID must be a unique integer value between 1 and 65534 for a suffix across all suppliers in the topology. If you promote a consumer to a hub, select Hub . Select Yes, I am sure . Click Change Role . If the replica in its new role should send updates to other hosts in the topology, create replication agreements.
Additional resources Configuring single-supplier replication using the web console Configuring multi-supplier replication using the web console Configuring cascading replication using the web console 12.3. Demoting a replica using the command line You can demote: A supplier to a hub or consumer A hub to a consumer This section describes how to demote a replica of the dc=example,dc=com suffix. Prerequisites The Directory Server instance is a member of a replication topology. The replica to demote is a supplier or hub. Procedure If the replica to demote has replication agreements that are no longer needed, for example, because you demote the replica to a consumer, remove the replication agreements: List the replication agreements on the replica: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-agmt list --suffix " dc=example,dc=com " dn: cn=example-agreement,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config cn: example-agreement ... The cn attribute contains the replication agreement name that you need in the next step. Remove the replication agreement from the replica: # dsconf -D " cn=Directory Manager " ldap://server.example.com repl-agmt delete --suffix " dc=example,dc=com " example-agreement Demote the instance: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication demote --suffix " dc=example,dc=com " --newrole " hub_or_consumer " Depending on the role you want to configure, set the --newrole parameter to hub or consumer . If you configured the replica as a hub and it should send updates to other hosts in the topology, create replication agreements. Additional resources Configuring single-supplier replication using the command line Configuring multi-supplier replication using the command line Configuring cascading replication using the command line 12.4. Demoting a replica using the web console You can demote: A supplier to a hub or consumer A hub to a consumer This section describes how to demote a replica of the dc=example,dc=com suffix. Prerequisites The Directory Server instance is a member of a replication topology. The replica to demote is a supplier or hub. You are logged in to the instance in the web console. Procedure If the replica to demote has replication agreements that are no longer needed, for example, because you demote the replica to a consumer, remove the replication agreements: Navigate to Replication Agreements . Click Actions next to the agreement you want to delete, and select Delete Agreement . Navigate to Replication Configuration , and click the Change Role button. Select the new replica role. Select Yes, I am sure . Click Change Role . If the replica in its new role should send updates to other hosts in the topology, create replication agreements. Additional resources Configuring single-supplier replication using the web console Configuring multi-supplier replication using the web console Configuring cascading replication using the web console
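As a rough sketch of that final step on the command line, creating an agreement that sends updates from the promoted or demoted replica to another host might look like the following; the host name, bind DN, password, and option values are placeholders, and you should verify the exact dsconf repl-agmt create options against your Directory Server version:
# dsconf -D "cn=Directory Manager" ldap://server.example.com repl-agmt create --suffix "dc=example,dc=com" --host "replica.example.com" --port 389 --conn-protocol LDAP --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" --bind-method SIMPLE --init example-agreement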
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-agmt list --suffix \" dc=example,dc=com \" dn: cn=example-agreement,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config cn: example-agreement", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-agmt delete --suffix \" dc=example,dc=com \" example-agreement", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication promote --suffix \" dc=example,dc=com \" --newrole \" supplier \" --replica-id 2", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication promote --suffix \" dc=example,dc=com \" --newrole \" hub \"", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-agmt list --suffix \" dc=example,dc=com \" dn: cn=example-agreement,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config cn: example-agreement", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com repl-agmt delete --suffix \" dc=example,dc=com \" example-agreement", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication demote --suffix \" dc=example,dc=com \" --newrole \" hub_or_consumer \"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_changing-the-role-of-a-replica_configuring-and-managing-replication
probe::ioblock_trace.request
probe::ioblock_trace.request Name probe::ioblock_trace.request - Fires just as a generic block I/O request is created for a bio. Synopsis ioblock_trace.request Values q request queue on which this bio was queued. size total size in bytes idx offset into the bio vector array phys_segments number of segments in this bio after physical address coalescing is performed. vcnt bio vector count which represents the number of array elements (page, offset, length) which make up this I/O request bdev target block device p_start_sect points to the start sector of the partition structure of the device ino i-node number of the mapped file rw binary trace for read/write request name name of the probe point sector beginning sector for the entire bio bdev_contains points to the device object which contains the partition (when bio structure represents a partition) devname block device name flags see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-of-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported bytes_done number of bytes transferred Context The process makes a block I/O request.
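A minimal SystemTap one-liner that uses this probe point and a few of the values listed above (run as root, and assumes the kernel debuginfo that SystemTap needs is installed) might look like this:
stap -e 'probe ioblock_trace.request { printf("%s %s sector=%d size=%d\n", devname, name, sector, size) }'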
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioblock-trace-request
Chapter 9. Developing extensions for Fuse Online integrations
Chapter 9. Developing extensions for Fuse Online integrations Fuse Online is a Red Hat Fuse feature that provides a web interface for integrating applications. Without writing code, a business expert can use Fuse Online to connect to applications and optionally operate on data between connections to different applications. If Fuse Online does not provide a feature that an integrator needs, then a developer can create an extension that defines the needed behavior. You can use Fuse Tooling to develop extensions that provide features for use in Fuse Online. An extension defines: One or more custom steps that operate on data between connections in an integration or One custom connector In Fuse Online, a connector represents a specific application to obtain data from or send data to. Each connector is a template for creating a connection to that specific application. For example, the Salesforce connector is the template for creating a connection to Salesforce. If Fuse Online does not provide a connector that the Fuse Online user needs, you can develop an extension that defines a custom connector. In Fuse Online, a data operation that happens between connections in an integration is referred to as a step . Fuse Online provides steps for operations such as filtering and mapping data. To operate on data between connections in ways that are not provided by Fuse Online built-in steps, you can develop a Fuse Online extension that defines one or more custom steps. Note You might prefer to develop an extension in the IDE of your choice. Whether you use Fuse Tooling or another IDE is entirely a matter of personal preference. Information about developing an extension in any IDE is in Integrating Applications with Fuse Online . 9.1. Overview of tasks Here is an overview of the tasks for developing a Fuse Online extension: Create a Fuse Online extension project and select Custom Connector or Custom Step as the extension type. Depending on the extension type, write the code for the extension: For a Custom Connector : Define the base Camel component, the connector icon, global connector properties, and the connector actions. For a Custom Step : Add routes, define actions, and specify any dependencies. Build a .jar file. Provide the .jar file to the Fuse Online user. The Fuse Online user uploads the .jar file to Fuse Online, which makes the custom connector or custom step(s) available for use. For information about Fuse Online and how to create integrations, see Integrating Applications with Fuse Online . 9.2. Prerequisites Before you begin, you need the following information and knowledge: A description of the required functionality for the Fuse Online custom connector or step (from the Fuse Online user). The Fuse Online version number for the extension. For a custom connector, an icon image file in PNG or SVG format. Fuse Online uses this icon when it displays the flow of an integration. If you do not provide an icon, then Fuse Online generates one when the .jar that contains the extension is uploaded. You should be familiar with: Fuse Online Spring Boot XML or Java Apache Camel routes (if you want to create a route-based step extension) JSON Maven 9.3. Creating a custom connector In Fuse Online, a custom connector consists of one or more connection configuration parameters, one or more connection actions, and optional configuration parameters for each action. 
Here is an overview of the tasks for developing a custom connector: Create a Fuse Online extension project and select Custom Connector as the extension type. Write the code for the extension. Define the base Camel component, the connector icon, global connector properties, and the connector actions. 9.3.1. Writing code for the custom connector After you create the Fuse Online extension project, you write the code that defines the custom connector elements based on the description of the required functionality provided to you by the Fuse Online user. The Table 9.1, "Custom connector elements" table shows how the elements of the custom connector that you create in Fuse Tooling correspond to elements in Fuse Online. Table 9.1. Custom connector elements Fuse Tooling element Fuse Online element Description Global (top-level) property Connection configuration parameter When a Fuse Online user creates a connection from this connector, the user specifies a value for this property as part of the configuration of the connection. Action Connection action In Fuse Online, for a connection created from this connector, a Fuse Online user selects one of these actions. Property defined in an action An action configuration parameter When a Fuse Online user configures the action that the connection performs, the Fuse Online user specifies a value for this property as part of the configuration of the action. To write the code that implements a custom connector for Fuse Online: Open the syndesis-extension-definition.json file in the Editor view and write the code that defines the global properties, the actions that the custom connector can perform, and each action's properties. Each global property corresponds to a connection configuration parameter in Fuse Online. Each action property corresponds to a Fuse Online connection action configuration parameter. In Fuse Online, when the user selects a custom connector, Fuse Online prompts for values for each connection configuration parameter. A custom connector can be for an application that uses the OAuth protocol. In this case, be sure to specify a global property for the OAuth client ID and another global property for the OAuth client secret. The Fuse Online user will need to specify values for these parameters for a connection created from this connector to work. Each connector action declares a base Camel component scheme. The example provided by the New Fuse Online Extension Project wizard uses the telegram Camel component scheme: If the custom connector requires additional dependencies, add them to the project's pom.xml file. The default scope for dependencies is runtime. If you add a dependency that Red Hat ships, define its scope as provided, for example: When you finish writing the code for the custom connector, build the .jar file as described in Section 9.5, "Building the Fuse Online extension JAR file" . 9.4. Creating custom steps After you create the Fuse Online extension project, you write the code that defines the custom steps based on the description of the required functionality provided to you by the Fuse Online user. Within a single extension, you can define more than one custom step and you can define each custom step with Camel routes or with Java beans. 9.4.1. Writing code for the custom step After you create the Fuse Online extension project, you write the code that defines the custom step(s)based on the description of the required functionality provided to you by the Fuse Online user. 
Table 9.2, "Custom step elements" shows how the elements of the custom step that you create in Fuse Tooling correspond to elements in Fuse Online. Table 9.2. Custom step elements Fuse Tooling element Fuse Online element Description Action Custom Step In Fuse Online, after the user imports the step extension, the custom step(s) appear(s) on the Choose a step page. Property defined in an action A custom step configuration parameter In Fuse Online, when the user selects a custom step, Fuse Online prompts for values for configuration parameters. To write the code that implements a custom step for Fuse Online: For a Camel route-based step, in the extension.xml file, create routes that address the purpose of the extension. The entrypoint of each route must match the entrypoint that you define in the syndesis-extension-definition.json file, as described in Step 2. For a Java bean-based step, edit the java file. In the syndesis-extension-definition.json file, write the code that defines the actions and their properties. You need a new action for each entrypoint. Each action that you create corresponds to a custom step in Fuse Online. You can use different types of code for each action. That is, you can use a Camel route for one action and a Java bean for another action. Each property corresponds to a Fuse Online step configuration parameter. In Fuse Online, when the user selects a custom step, Fuse Online prompts for values for configuration parameters. For example, a custom log step might have a level parameter that indicates how much information to send to the log. Here is the template for the .json file that contains the extension metadata, including properties that will be filled in by the user in Fuse Online after uploading the extension and adding its custom step to an integration: Note The tags are ignored in this release. They are reserved for future use. To edit the extension dependencies, open the `pom.xml` file in the editor. If you add a dependency, you must define its scope. When you finish writing the code for the custom step(s), build the .jar file as described in Section 9.5, "Building the Fuse Online extension JAR file" . 9.5. Building the Fuse Online extension JAR file To build the .jar file for the extension: In the Project Explorer view, right-click the project. From the context menu, select Run As Maven clean verify . In the Console view, you can monitor the progress of the build. When the build is complete, refresh the target folder in the Project Explorer view (select the project and then press F5 ). In the Project Explorer view, open the target folder to see the generated .jar file: The name of the .jar file follows Maven defaults: ${artifactId}-${version}.jar For example: custom:step-camel-1.0.0.jar This .jar file defines the extension, its required dependencies, and its metadata: Extension Id, Name, Version, Tags, and Description. For example: 9.6. Providing the JAR file to the Fuse Online user Provide the following to the Fuse Online user: The .jar file A document that describes the extension. For a step extension, include information about data shapes that each action in the step extension requires as input or provides as output (for data mapping). In Fuse Online, the user uploads the .jar file as described in Integrating Applications with Fuse Online .
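If you prefer to build from the command line instead of the IDE, the equivalent of the Run As Maven clean verify action is a standard Maven build run from the project directory; the jar name is just the ${artifactId}-${version}.jar pattern described above:
mvn clean verify
ls target/    # the generated ${artifactId}-${version}.jar appears here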
[ "{ \"schemaVersion\" : \"v1\", \"name\" : \"Example Fuse Online Extension\", \"extensionId\" : \"fuse.online.extension.example\", \"version\" : \"1.0.0\", \"actions\" : [ { \"id\" : \"io.syndesis:telegram-chat-from-action\", \"name\" : \"Chat Messages\", \"description\" : \"Receive all messages sent to the chat bot\", \"descriptor\" : { \"componentScheme\" : \"telegram\", \"inputDataShape\" : { \"kind\" : \"none\" }, \"outputDataShape\" : { \"kind\" : \"java\", \"type\" : \"org.apache.camel.component.telegram.model.IncomingMessage\" }, \"configuredProperties\" : { \"type\" : \"bots\" } }, \"actionType\" : \"connector\", \"pattern\" : \"From\" }, { \"id\" : \"io.syndesis:telegram-chat-to-action\", \"name\" : \"Send a chat Messages\", \"description\" : \"Send messages to the chat (through the bot).\", \"descriptor\" : { \"componentScheme\" : \"telegram\", \"inputDataShape\" : { \"kind\" : \"java\", \"type\" : \"java.lang.String\" }, \"outputDataShape\" : { \"kind\" : \"none\" }, \"propertyDefinitionSteps\" : [ { \"description\" : \"Chat id\", \"name\" : \"chatId\", \"properties\" : { \"chatId\" : { \"kind\" : \"parameter\", \"displayName\" : \"Chat Id\", \"type\" : \"string\", \"javaType\" : \"String\", \"description\" : \"The telegram's Chat Id, if not set will use CamelTelegramChatId from the incoming exchange.\" } } } ], \"configuredProperties\" : { \"type\" : \"bots\" } }, \"actionType\" : \"connector\", \"pattern\" : \"To\" } ], \"properties\" : { \"authorizationToken\" : { \"kind\" : \"property\", \"displayName\" : \"Authorization Token\", \"group\" : \"security\", \"label\" : \"security\", \"required\" : true, \"type\" : \"string\", \"javaType\" : \"java.lang.String\", \"secret\" : true, \"description\" : \"Telegram Bot Authorization Token\" } } }", "<dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-telegram</artifactId> <scope>provided</scope> </dependency> </dependencies>", "{ \"actions\": [ { \"actionType\": \"extension\", \"id\": \"USD{actionId}\", \"name\": \"Action Name\", \"description\": \"Action Description\", \"tags\": [ \"xml\" ], \"descriptor\": { \"kind\": \"ENDPOINT|BEAN|STEP\", \"entrypoint\": \"direct:USD{actionId}\", \"inputDataShape\": { \"kind\": \"any\" }, \"outputDataShape\": { \"kind\": \"any\" }, \"propertyDefinitionSteps\": [] } } ], \"tags\": [ \"feature\", \"experimental\" ] }", "{ \"schemaVersion\" : \"v1\", \"name\" : \"Example Fuse Online Extension\", \"description\" : \"Logs a message body with a prefix\", \"extensionId\" : \"fuse.online.extension.example\", \"version\" : \"1.0.0\", \"actions\" : [ { \"id\" : \"Log-body\", \"name\" : \"Log Body\", \"description\" : \"A simple xml Body Log with a prefix\", \"descriptor\" : { \"kind\" : \"ENDPOINT\", \"entrypoint\" : \"direct:log-xml\", \"resource\" : \"classpath:META-INF/syndesis/extensions/log-body-action.xml\", \"inputDataShape\" : { \"kind\" : \"any\" }, \"outputDataShape\" : { \"kind\" : \"any\" }, \"propertyDefinitionSteps\" : [ { \"description\" : \"Define your Log message\", \"name\" : \"Log Body\", \"properties\" : { \"prefix\" : { \"componentProperty\" : false, \"deprecated\" : false, \"description\" : \"The Log body prefix message\", \"displayName\" : \"Log Prefix\", \"javaType\" : \"String\", \"kind\" : \"parameter\", \"required\" : false, \"secret\" : false, \"type\" : \"string\" } } } ] }, \"tags\" : [ \"xml\" ], \"actionType\" : \"step\" } ], \"dependencies\" : [ { \"type\" : \"MAVEN\", \"id\" : \"io.syndesis.extension:extension-api:jar:1.3.0.fuse-000014\" } 
], \"extensionType\" : \"Steps\" }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/fuseonlineextension
7.121. man-pages-fr
7.121. man-pages-fr 7.121.1. RHBA-2015:0667 - man-pages-fr bug fix update An updated man-pages-fr package that fixes one bug is now available for Red Hat Enterprise Linux 6. The man-pages-fr package contains a collection of manual pages translated into French. Bug Fix BZ# 1135541 The French version of the "du" man page does not contain an up-to-date list of "du" options and their descriptions. Because the man page is no longer maintained, this update adds a message at the top of the page stating that the documentation is outdated, and that users can find the latest version in the English man page. Users of man-pages-fr are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-man-pages-fr
25.22. Configuring Maximum Time for Error Recovery with eh_deadline
25.22. Configuring Maximum Time for Error Recovery with eh_deadline Important In most scenarios, you do not need to enable the eh_deadline parameter. Using the eh_deadline parameter can be useful in certain specific scenarios, for example if a link loss occurs between a Fibre Channel switch and a target port, and the Host Bus Adapter (HBA) does not receive Registered State Change Notifications (RSCNs). In such a case, I/O requests and error recovery commands all time out rather than encounter an error. Setting eh_deadline in this environment puts an upper limit on the recovery time, which enables the failed I/O to be retried on another available path by multipath. However, if RSCNs are enabled, if the HBA does not register the link becoming unavailable, or if both are true, the eh_deadline functionality provides no additional benefit, because the I/O and error recovery commands fail immediately, which allows multipath to retry. The SCSI host object eh_deadline parameter enables you to configure the maximum amount of time that the SCSI error handling mechanism attempts to perform error recovery before stopping and resetting the entire HBA. The value of eh_deadline is specified in seconds. The default setting is off , which disables the time limit and allows all of the error recovery to take place. In addition to using sysfs , a default value can be set for all SCSI HBAs by using the scsi_mod.eh_deadline kernel parameter. Note that when eh_deadline expires, the HBA is reset, which affects all target paths on that HBA, not only the failing one. As a consequence, I/O errors can occur if some of the redundant paths are not available for other reasons. Enable eh_deadline only if you have a fully redundant multipath configuration on all targets.
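The following is a minimal sketch of how these settings are typically applied; the host number ( host0 ) and the 60-second value are illustrative assumptions, not recommendations from this guide:
# Limit SCSI error recovery on a single host to 60 seconds through sysfs
echo 60 > /sys/class/scsi_host/host0/eh_deadline
# Verify the current setting
cat /sys/class/scsi_host/host0/eh_deadline
# Alternatively, set a default for all SCSI HBAs by appending the kernel parameter
# scsi_mod.eh_deadline=60 to the boot command line (for example, in the GRUB configuration).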
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/sec-configuring-maximum-time-for-error-recovery
Chapter 8. Virtual machine templates
Chapter 8. Virtual machine templates 8.1. Creating virtual machine templates 8.1.1. About virtual machine templates Preconfigured Red Hat virtual machine templates are listed in the Templates tab within the Virtualization page. These templates are available for different versions of Red Hat Enterprise Linux, Fedora, Microsoft Windows 10, and Microsoft Windows Servers. Each Red Hat virtual machine template is preconfigured with the operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). The Templates tab displays four types of virtual machine templates: Red Hat Supported templates are fully supported by Red Hat. User Supported templates are Red Hat Supported templates that were cloned and created by users. Red Hat Provided templates have limited support from Red Hat. User Provided templates are Red Hat Provided templates that were cloned and created by users. Note In the Templates tab, you cannot edit or delete Red Hat Supported or Red Hat Provided templates. You can only edit or delete custom virtual machine templates that were created by users. Using a Red Hat template is convenient because the template is already preconfigured. When you select a Red Hat template to create your own custom template, the Create Virtual Machine Template wizard prompts you to add a boot source if a boot source was not added previously. Then, you can either save your custom template or continue to customize it and save it. You can also select the Create Virtual Machine Template wizard directly and create a custom virtual machine template. The wizard prompts you to provide configuration details for the operating system, flavor, workload type, and other settings. You can add a boot source and continue to customize your template and save it. 8.1.2. Adding a boot source for a virtual machine template A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Available in the Templates tab. After you add a boot source to a template, you can create a new virtual machine from the template. There are four methods for selecting and adding a boot source in the web console: Upload local file (creates PVC) Import via URL (creates PVC) Clone existing PVC (creates PVC) Import via Registry (creates PVC) Prerequisites To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added. To upload a local file, the operating system image file must exist on your local machine. To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images. To clone an existing PVC, access to the project with a PVC is required. To import via registry, access to the container registry is required. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Identify the virtual machine template for which you want to configure a boot source and click Add source . In the Add boot source to template window, click Select boot source , select a method for creating a persistent volume claim (PVC): Upload local file , Import via URL , Clone existing PVC , or Import via Registry . 
Optional: Click This is a CD-ROM boot source to mount a CD-ROM and use it to install the operating system onto an empty disk. The additional empty disk is automatically created and mounted by OpenShift Virtualization. If the additional disk is not needed, you can remove it when you create the virtual machine. Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required. Optional: Enter a name for Source provider to associate the name with this template. Advanced: Click Storage class and select the storage class that is used to create the disk. Advanced: Click Access mode and select an access mode for the persistent volume. Supported access modes are: Single User (RWO) , Shared Access (RWX) , and Read Only (ROX) . Advanced: Click Volume mode if you want to select Block instead of the default value Filesystem . Select the appropriate method to save your boot source: Click Save and upload if you uploaded a local file. Click Save and import if you imported content from a URL or the registry. Click Save and clone if you cloned an existing PVC. Your custom virtual machine template with a boot source is listed in the Templates tab, and you can create virtual machines by using this template. 8.1.2.1. Virtual machine template fields for adding a boot source The following table describes the fields for the Add boot source to template window. This window displays when you click Add Source for a virtual machine template in the Templates tab. Name Parameter Description Boot source type Upload local file (creates PVC) Upload a file from your local device. Supported file types include gz, xz, tar, and qcow2. Import via URL (creates PVC) Import content from an image available from an HTTP or HTTPS endpoint. Obtain the download link URL from the web page where the image download is available and enter that URL link in the Import via URL (creates PVC) field. Example: For a Red Hat Enterprise Linux image, log on to the Red Hat Customer Portal, access the image download page, and copy the download link URL for the KVM guest image. Clone existing PVC (creates PVC) Use a PVC that is already available in the cluster and clone it. Import via Registry (creates PVC) Specify the bootable operating system container that is located in a registry and accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo. Source provider Optional field. Add descriptive text about the source for the template or the name of the user who created the template. Example: Red Hat. Advanced Storage class The storage class that is used to create the disk. Access mode Access mode of the persistent volume. Supported access modes are Single User (RWO) , Shared Access (RWX) , Read Only (ROX) . If Single User (RWO) is selected, the disk can be mounted as read/write by a single node. If Shared Access (RWX) is selected, the disk can be mounted as read-write by many nodes. The kubevirt-storage-class-defaults config map provides access mode defaults for data volumes. The default value is set according to the best option for each storage class in the cluster. Note Shared Access (RWX) is required for some features, such as live migration of virtual machines between nodes. Volume mode Defines whether the persistent volume uses a formatted file system or raw block state. Supported modes are Block and Filesystem . The kubevirt-storage-class-defaults config map provides volume mode defaults for data volumes.
The default value is set according to the best option for each storage class in the cluster. 8.1.3. Marking virtual machine templates as favorites For easier access to virtual machine templates that are used frequently, you can mark those templates as favorites. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Identify the Red Hat template that you want to mark as a favorite. Click the Options menu and select Favorite template . The template moves up higher in the list of displayed templates. 8.1.4. Filtering the list of virtual machine templates by providers In the Templates tab, you can use the Search by name field to search for virtual machine templates by specifying either the name of the template or a label that identfies the template. You can also filter templates by the provider, and display only those templates that meet your filtering criteria. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. To filter templates, click Filter . Select the appropriate checkbox from the list to filter the templates: Red Hat Supported , User Supported , Red Hat Provided , and User Provided . 8.1.5. Creating a virtual machine template with the wizard in the web console The web console features the Create Virtual Machine Template wizard that guides you through the General , Networking , Storage , Advanced , and Review steps to simplify the process of creating virtual machine templates. All required fields are marked with a *. The Create Virtual Machine Template wizard prevents you from moving to the step until you provide values in the required fields. Note The wizard guides you to create a custom virtual machine template where you specify the operating system, boot source, flavor, and other settings. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Click Create and select Template with Wizard . Fill in all required fields in the General step. Click to progress to the Networking step. A NIC that is named nic0 is attached by default. Optional: Click Add Network Interface to create additional NICs. Optional: You can remove any or all NICs by clicking the Options menu and selecting Delete . Virtual machines created from a template do not need a NIC attached. NICs can be created after a virtual machine has been created. Click to progress to the Storage step. Click Add Disk to add a disk, and complete your selections for the fields in the Add Disk screen. Note If Import via URL (creates PVC) , Import via Registry (creates PVC) , or Container (ephemeral) is selected as Source , a rootdisk disk is created and attached to the virtual machine as the Bootable Disk . A Bootable Disk is not required for virtual machines provisioned from a PXE source if there are no disks attached to the virtual machine. If one or more disks are attached to the virtual machine, you must select one as the Bootable Disk . Blank disks, PVC disks without a valid boot source, and the cloudinitdisk cannot be used as a boot source. Optional: Click Advanced to configure Cloud-init. Click Review to review and confirm your settings. Click Create Virtual Machine template . Click See virtual machine template details to view details about the virtual machine template. The template is also listed in the Templates tab. 8.1.6. 
Virtual machine template wizard fields The following tables describe the fields for the General , Networking , Storage , and Advanced steps in the Create Virtual Machine Template wizard. 8.1.6.1. Virtual machine template wizard fields Name Parameter Description Template Template from which to create the virtual machine. Selecting a template will automatically complete other fields. Name The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), and hyphens ( - ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods ( . ), or special characters. Template provider The name of the user who is creating the template for the cluster or any meaningful name that identifies this template. Template support No additional support This template does not have additional support in the cluster. Support by template provider This template is supported by the template provider. Description Optional description field. Operating System The primary operating system that is selected for the virtual machine. Selecting an operating system automatically selects the default Flavor and Workload Type for that operating system. Boot Source Import via URL (creates PVC) Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image. Clone existing PVC (creates PVC) Select an existent persistent volume claim available on the cluster and clone it. Import via Registry (creates PVC) Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo . PXE (network boot - adds network interface) Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition. Persistent Volume Claim project Project name that you want to use for cloning the PVC. Persistent Volume Claim name PVC name that should apply to this virtual machine template if you are cloning an existing PVC. Mount this as a CD-ROM boot source A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later. Flavor Tiny, Small, Medium, Large, Custom Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system. Workload Type Desktop A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. Server Balances performance and it is compatible with a wide range of server workloads. High-Performance A virtual machine configuration that is optimized for high-performance workloads. 8.1.6.2. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks. Select SR-IOV if you configured an SR-IOV network device and defined that network in the namespace. MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 8.1.6.3. 
Storage fields Name Selection Description Source Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced Volume Mode Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem . Advanced Access Mode Access mode of the persistent volume. Supported access modes are Single User (RWO) , Shared Access (RWX) , and Read Only (ROX) . Advanced storage settings The following advanced storage settings are available for Blank , Import via URL , and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Name Parameter Description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode Single User (RWO) The disk can be mounted as read/write by a single node. Shared Access (RWX) The disk can be mounted as read/write by many nodes. Note This is required for some features, such as live migration of virtual machines between nodes. Read Only (ROX) The disk can be mounted as read-only by many nodes. 8.1.6.4. Cloud-init fields Name Description Hostname Sets a specific hostname for the virtual machine. Authorized SSH Keys The user's public key that is copied to ~/.ssh/authorized_keys on the virtual machine. Custom script Replaces other options with a field in which you paste a custom cloud-init script. 8.1.7. Additional Resources See Configuring the SR-IOV Network Operator for further details on the SR-IOV Network Operator. 8.2. Editing virtual machine templates You can update a virtual machine template in the web console, either by editing the full configuration in the YAML editor or by selecting a custom template in the Templates tab and modifying the editable items. 8.2.1. Editing a virtual machine template in the web console Edit select values of a virtual machine template in the web console by clicking the pencil icon to the relevant field. Other values can be edited using the CLI. Labels and annotations are editable for both preconfigured Red Hat templates and your custom virtual machine templates. All other values are editable only for custom virtual machine templates that users have created using the Red Hat templates or the Create Virtual Machine Template wizard. Procedure Click Workloads Virtualization from the side menu. Click the Templates tab. 
Select a virtual machine template. Click the VM Template Details tab. Click the pencil icon to make a field editable. Make the relevant changes and click Save . Editing a virtual machine template will not affect virtual machines already created from that template. 8.2.2. Editing virtual machine template YAML configuration in the web console You can edit the YAML configuration of a virtual machine template from the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be modified. Note Navigating away from the YAML screen while editing cancels any changes to the configuration that you made. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Select a template to open the VM Template Details screen. Click the YAML tab to display the editable configuration. Edit the file and click Save . A confirmation message, which includes the updated version number for the object, shows that the YAML configuration was successfully edited. 8.2.3. Adding a virtual disk to a virtual machine template Use this procedure to add a virtual disk to a virtual machine template. Procedure Click Workloads Virtualization from the side menu. Click the Templates tab. Select a virtual machine template to open the VM Template Details screen. Click the Disks tab. In the Add Disk window, specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: In the Advanced list, specify the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . 8.2.4. Adding a network interface to a virtual machine template Use this procedure to add a network interface to a virtual machine template. Procedure Click Workloads Virtualization from the side menu. Click the Templates tab. Select a virtual machine template to open the VM Template Details screen. Click the Network Interfaces tab. Click Add Network Interface . In the Add Network Interface window, specify the Name , Model , Network , Type , and MAC Address of the network interface. Click Add . 8.2.5. Editing CD-ROMs for Templates Use the following procedure to edit CD-ROMs for virtual machine templates. Procedure Click Workloads Virtualization from the side menu. Click the Templates tab. Select a virtual machine template to open the VM Template Details screen. Click the Disks tab. Click the Options menu for the CD-ROM that you want to edit and select Edit . In the Edit CD-ROM window, edit the fields: Source , Persistent Volume Claim , Name , Type , and Interface . Click Save . 8.3. Enabling dedicated resources for virtual machine templates Virtual machines can have resources of a node, such as CPU, dedicated to them to improve performance. 8.3.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 8.3.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. 8.3.3. Enabling dedicated resources for a virtual machine template You can enable dedicated resources for a virtual machine template in the Details tab. 
Virtual machines that were created by using a Red Hat template or the wizard can be enabled with dedicated resources. Procedure Click Workloads Virtual Machine Templates from the side menu. Select a virtual machine template to open the Virtual Machine Template tab. Click the Details tab. Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window. Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 8.4. Deleting a virtual machine template Red Hat virtual machine templates cannot be deleted. You can use the web console to delete: Virtual machine templates created from Red Hat templates Custom virtual machine templates that were created by using the Create Virtual Machine Template wizard. 8.4.1. Deleting a virtual machine template in the web console Deleting a virtual machine template permanently removes it from the cluster. Note You can delete virtual machine templates that were created by using a Red Hat template or the Create Virtual Machine Template wizard. Preconfigured virtual machine templates that are provided by Red Hat cannot be deleted. Procedure In the OpenShift Virtualization console, click Workloads Virtualization from the side menu. Click the Templates tab. Select the appropriate method to delete a virtual machine template: Click the Options menu of the template to delete and select Delete Template . Click the template name to open the Virtual Machine Template Details screen and click Actions Delete Template . In the confirmation pop-up window, click Delete to permanently delete the template.
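If you prefer the OpenShift CLI to the web console, a custom template can also be removed with oc . This is a minimal sketch, and the template name rhel8-server-custom and the project my-templates are illustrative assumptions:
# List the virtual machine templates in the project
oc get templates -n my-templates
# Permanently delete a custom virtual machine template
oc delete template rhel8-server-custom -n my-templates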
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/openshift_virtualization/virtual-machine-templates
Chapter 3. RHSA-2022:4711-06 Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.0] security update
Chapter 3. RHSA-2022:4711-06 Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.0] security update The bugs in this chapter are addressed by advisory RHSA-2022:4711-06. Further information about this advisory is available at https://errata.devel.redhat.com/advisory/84555 3.1. distribution BZ# 2065052 Red Hat Virtualization 4.4 SP1 now requires ansible-core >= 2.12.0 to execute Ansible playbooks/roles internally from RHV components. BZ# 2072637 python3-daemon/python38-daemon are runtime dependencies for several Red Hat Virtualization Manager components. They need to be provided in the RHEL-8-RHEV-S-4.4 channel BZ# 2072639 ansible-runner-2.1.3-1.el8ev is a runtime dependency for the Red Hat Virtualization Manager. It needs to be provided in the RHEL-8-RHEV-S-4.4 channel BZ# 2072641 python3-docutils/python38-docutils are runtime dependencies for several Red Hat Virtualization Manager components. They need to be provided in the RHEL-8-RHEV-S-4.4 channel BZ# 2072642 python3-lockfile/python38-lockfile are runtime dependencies for several Red Hat Virtualization Manager components. They need to be provided in the RHEL-8-RHEV-S-4.4 channel BZ# 2072645 python3-pexpect/python38-pexpect are runtime dependencies for several Red Hat Virtualization Manager components. They need to be provided in the RHEL-8-RHEV-S-4.4 channel BZ# 2072646 ansible-core-2.12 requires all libraries used in Ansible modules/roles/playbooks to be built with Python 3.8. The python38-ptyprocess needs to be built and distributed in Red Hat Virtualization channels. BZ# 1608675 Red Hat Virtualization is compliant with USGv6 Revision 1 standards since version 4.4.6 of RHV. For more information, see https://www.iol.unh.edu/registry/usgv6?name=red+hat . 3.2. ovirt-engine BZ# 977379 With this release, it is now possible to edit and manage iSCSI storage domain connections using the Administration Portal. Users can now edit the logical domain to point to a different physical storage, which is useful if the underlying LUNs are replicated for backup purposes, or if the physical storage address has changed. BZ# 977778 In this release, support has been added for the conversion of a disk's format and allocation policy. This can help reduce space usage and improve performance, as well as enabling incremental backup on existing raw disks. BZ# 2015796 Red Hat Virtualization Manager 4.4 SP1 is now capable of running on a host with the RHEL 8.6 DISA STIG OpenSCAP profile applied. BZ# 2023250 The Advanced Virtualization module (virt:av) has been merged into the standard RHEL virtualization module (virt:rhel) as part of the RHEL 8.6 release. Due to this change, the host deploy and host upgrade flows have been updated to properly enable the virt:rhel module during new installation of the RHEL 8.6 host and during upgrade of an existing RHEL 8.5 or earlier host to a RHEL 8.6 host. BZ# 2030596 The Red Hat Virtualization Manager is now capable of running on machine with the PCI-DSS security profile. BZ# 2035051 Red Hat Virtualization 4.4 SP1 uses the updated DISA STIG OpenSCAP profile from RHEL 8.6, which does not remove the gssproxy package.As a result, the Red Hat Virtualization host works correctly after applying the DISA STIG profile. BZ# 2052690 Red Hat Virtualization 4.4 SP1 now requires ansible-core >= 2.12.0 to execute Ansible playbooks/roles internally from RHV components. BZ# 2055136 With this release, the virt DNF module version is correctly set according to the RHEL version of the host during the host upgrade flow. 
BZ# 2056021 Previously, renewing of the libvirt-vnc certificate was omitted during the Enroll Certificate flow. With the release of RHV 4.4 SP1, libvirt-vnc certificates are renewed during the Enroll Certificate flow. BZ# 2056126 With this release, the Red Hat Virtualization Manager 4.4 SP1 certificate expiration check will warn of upcoming certificate expiration earlier: 1. If a certificate is about to expire in the upcoming 120 days, a WARNING event is raised in the audit log. 2. If a certificate is about to expire in the upcoming 30 days, an ALERT event is raised in the audit log. This checks for internal RHV certificates (for example certificate for RHVM <-> hypervisor communication), but it doesn't check for custom certificates configured for HTTPS access to RHVM as configured according to Replacing the Manager CA Certificate . BZ# 2071468 If SSH soft fencing needs to be executed on a problematic host, the Red Hat Virtualization Manager now waits the expected time interval before it continues with fencing. As a result, the VDSM has enough time to start and respond to the Red Hat Virtualization Manager. BZ# 655153 Previously, no confirmation dialog was shown for the suspend VM operation. A virtual machine was suspended right after clicking the suspend-VM button. With this release, a confirmation dialog is presented by default when pressing the suspend-VM button. The user can choose not to show this confirmation dialog again. The setting can be reverted in the user preferences dialog. BZ# 1878930 Feature: Provide warning event if number of available MAC addresses in pool are below threshold. The threshold is configurable via engine-config. An event will be created per pool on engine start, and if the threshold is reached when consuming addresses from the pool. Reason: Make it easier for the admin user to plan ahead. Result: Admin will not be faced with an empty pool when creating VNICs on VMs. BZ# 1926625 With this release, you can now enable HTTP Strict Transport Security following Red Hat Virtualization Manager installation by following the instructions in this KCS article: https://access.redhat.com/solutions/1220063 BZ# 1998255 Feature: Search box in VNIC profiles main page Reason: Requested by customer Result: It is now possible to search and filter the VNIC profiles by values of their attributes in the main VNIC profiles page. BZ# 1999698 In previous versions, engine-setup configured apache httpd's SSLProtocol configuration option to be -all +TLSv1.2 . In RHEL 8, this isn't needed, because this option is managed by crypto-policies. With this version, engine-setup does not set this option, and removes it if it's already set, letting it be managed by crypto-policies. BZ# 2000031 Previously, host non-responding treatment could be called multiple times simultaneously. In this release, multiple calls to non-responding treatment are prevented, and the host comes up much faster. BZ# 2006745 Previously, when trying to copy a template disk from/to a Managed Block Storage domain, the operation failed due to an incorrect storage domain ID, saving the same image repeatedly in the images (and base disks) DB tables, and casting the disk to DiskImage when it is of type ManagedBlockStorageDisk. In this release, all of the above issues were fixed, and copying a template disk from/to a Managed Block Storage domain works as expected. BZ# 2007384 Previously, high values of disk writeRate/readRate were not processed properly by the ovirt-engine.
In this release, the type of writeRate/readRate in ovirt-engine has changed from integer to long to support values that are higher than integers. BZ# 2040361 Previously, when hot plugging multiple disks with VIRTIO SCSI interface to virtual machine that are defined with more than one IO thread, this would have failed due to allocation of a duplicate PCI address. Now, each disk is assigned with a unique PCI address in this process, which enabled to plug multiple disks with VIRTIO SCSI to virtual machines also when they are set with more than one IO thread. BZ# 2043146 Previously, renewing of the libvirt-vnc certificate was omitted during the Enroll Certificate flow. With the release of RHV 4.4 SP1 and libvirt-vnc certificates are renewed during the Enroll Certificate flow. BZ# 1624015 Feature: Setting the default console type (for both new and existing VMs) can be done engine widely by using CLI for setting the following engine-config parameters: engine-config -s ClientModeVncDefault=NoVnc to prefer NoVnc instead of remote-viewer and engine-config -s ClientModeConsoleDefault=vnc to prefer VNC over SPICE in case the VM has both available. If the actual console type for existed VMs was chosen manually via 'console options' dialog, cleaning the browser local storage is needed. So in caseit's required to set console type globally for all existing VMs, please clear the browser local storage after running the engine. Reason: An option for setting default console type for all provisioned VMs globally at once was not supported up till now. Needed to go one VM by one and set the console type via the 'console options' dialog. Result: Support setting console type globally for all VMs, existed and new ones, by using the engine-config parameters. BZ# 1648985 A user with SuperUser role can connect to a virtual machine in a VM-pool without having the VM assigned. Previously, this did not prevent other users from taking that VM, which resulted in closing the connected console and assigning the VM to a user with UserRole instead. In this release, users cannot take VMs that other users are connected to via a console. This prevents users with UserRole permissions from hijacking a VM that a user with SuperUser role is connected to. BZ# 1687845 Previously, displaying notifications for hosts activated from maintenance mode was done when the actual job activation "end time" was after the last displayed notification. But if there was a time difference between server and the browser, the job "end time" could be in the future. In this release, notifications rely only on the server time, and the job's "end time" is no longer compared to local browser time.As a result, only one "Finish activating host" notification appears. BZ# 1745141 With this release, SnowRidge Accelerator Interface Architecture (AIA) can be enabled by modifying the extra_cpu_flags custom property of a virtual machine (movdiri, movdir64b). BZ# 1782056 With this release, IPSec for the OVN feature is available on hosts with configured ovirt-provider-ovn, OVN version 2021 or later and OvS version 2.15 or later. BZ# 1849169 Feature: A new parameter was added to the evenly_distributed scheduling policy that takes into account the ratio between virtual and physical CPUs on the host. Reason: To prevent the host from over utilization of all physical CPUs. Result: When the ratio is set to 0, the evenly distributed policy works as before. If the value is greater than 0, the vCPU to physical CPU is considered as follows: a. 
when scheduling a VM, hosts with lower CPU utilization are preferred. However, if adding of the VM would cause the vCPU to physical ratio to be exceeded, the hosts vCPU to physical ratio AND cpu utilization are considered. b. in a running environment, if the host's vCPU to physical ratio is above the limit, some of the VMs might be load balanced to the hosts with lower vCPU to physical CPU ratio. BZ# 1922977 With this release, shared disks are now a part of the 'OVF_STORE' configuration. This allows virtual machines to share disks, move a Storage Domain to another environment, and after importing VMs, the VMs correctly share the same disks without any additional manual configuration. BZ# 1927985 With this release, Padding between files has been added for exporting a virtual machine to an Open Virtual Appliance (OVA). The goal is to align disks in the OVA to the edge of a block of the underlying filesystem. As a result,disks are written faster during export, especially with an NFS partition. BZ# 1944290 Previously, when trying to log in to Red Hat Virtualization VM Portal or Administration Portal with an expired password, the URL to change the password was not shown properly. In this release, when there is an expired password error, the following clickable link appears beneath the error message: "Click here to change the password". This link will redirect the user to the change password page: "... /ovirt-engine/sso/credentials-change.html". BZ# 1944834 This release adds a user specified delay to the 'Shutdown' Console Disconnect Action of a Virtual Machine. The shutdown will occur after the user specified delay interval, or will be cancelled if the user reconnects to the VM console. This prevents a user's session loss after an accidental disconnect. BZ# 1959186 Previously, there was no way to set a quota different from that of the template from the VM portal. Thus, if the user had no access to the quota on the template, the user could not provision VMs from the template using the VM portal. In this release, the Red Hat Virtualization Manager selects a quota that the user has access to, and not necessarily from the template, when provisioning VMs from templates using the VM portal. BZ# 1964208 With this release, a screenshot API has been added that captures the current screen of a VM, and then returns a PPM file screenshot. The user can download the screenshot and view its content. BZ# 1971622 Previously, when displaying the Host's Virtual Machines sub-tab, all virtual machines were marked with a warning sign. In this release, the warning sign is displayed correctly in the same way as on the Virtual Machines list page. BZ# 1974741 Previously, a bug in the finalization mechanism left the disk locked in the database. In this release, the finalization mechanism works correctly, and the disk remains unlocked in all scenarios. BZ# 1979441 Previously there was a warning that indicates the VM CPU is different than the cluster CPU for high performance virtual machines. With this release, the warning is not shown when CPU passthrough is configured, and as a result, not presented for high performance virtual machines. BZ# 1979797 In this release, a new warning message displays in the removing storage domain window if the selected domain has leases for entities that were raised on a different storage domain. BZ# 1986726 When importing VM from OVA and setting the allocation policy to Preallocated, the disks were imported as Thin provisioned. In this release, the selected allocation policy is followed. 
BZ# 1987121 The vGPU editing dialog was enhanced with an option to set driver parameters. The driver parameters are specified as an arbitrary text, which is passed to NVidia drivers as it is, e.g. "enable_uvm=1". The given text will be used for all the vGPUs of a given VM. The vGPU editing dialog was moved from the host devices tab to the VM devices tab. vGPU properties are no longer specified using mdev_type VM custom property. They are specified as VM devices now. This change is transparent when using the vGPU editing dialog. In the REST API, the vGPU properties can be manipulated using a newly introduced ... /vms/... /mediateddevices endpoint. The new API permits setting "nodisplay" and driver parameters for each of the vGPUs individually, but note that this is not supported in the vGPU editing dialog where they can be set only to a single value common for all the vGPUs of a given VM. BZ# 1988496 Previously, the vmconsole-proxy-helper certificate was not renewed when needed. With this release, the certificate is renewed each time following the CA certificate update. BZ# 2002283 With this release, it is now possible to set the number of PCI Express ports for virtual machines by setting the NumOfPciExpressPorts configuration using engine-config. BZ# 2003996 Previously, snapshots that represent VM next-run configuration were reported by ovirt-ansible but their type was missing and they could not be removed. In this release, snapshots that represent VM next-run configuration are not reported to clients, including ovirt-ansible. BZ# 2021217 Add Windows 2022 as a guest operating system. BZ# 2023786 When a VM is set with the custom property sap_agent=true, it requires vhostmd hooks to be installed on the host to work correctly. Previously, if the hooks were missing, there was no warning to the user. In this release, when the required hooks are not installed and reported by the host, the host is filtered out by the scheduler when starting the VM. BZ# 2040474 The Administration Portal cluster upgrade interface has been improved to provide better error messaging and status and progress indications. BZ# 2041544 Previously, when selecting a host to upload in the Administration Portal (Storage > Domain > select domain > Disks > Upload), trying to select a host different from the first one on the list resulted in jumping back to the first host on the list. In this release, the storage domain and data center are only initialized once, and the list of hosts doesn't need to be reloaded. As a result, a different host can be selected without being set back to the first one on the list. BZ# 2052557 Previously, vGPU devices were not released when stateless VMs or VMs that were started in run-once mode were shut down. This sometimes caused the system to forbid running the VMs again, although the vGPU devices were available. In this release, vGPU devices are properly released when stateless VMs or VMs that were started in run-once mode are shut down. BZ# 2066084 Previously, the vmconsole-proxy-user and vmconsole-proxy-host certificates were not renewed when needed. With this release, the certificates are now renewed when executing engine-setup. 3.3. ovirt-engine-dwh BZ# 2014888 Dashboard field descriptions have been updated to match the real meanings of I/O operations data fields. BZ# 2010903 Database columns and dashboard field descriptions have been updated to match the real meanings in I/O operations data fields. 3.4.
ovirt-engine-metrics BZ# 1990462 In this release, Elasticsearch username and password have been added for authentication from rsyslog. AS a result, rsyslog can now authenticate to Elasticsearch using a username and password. BZ# 2059521 Red Hat Virtualization 4.4 SP1 now requires ansible-core >= 2.12.0 to execute Ansible playbooks/roles internally from RHV components. 3.5. ovirt-engine-ui-extensions BZ# 2024202 Previously, the formatting of parameters passed to translated messages on ui-extensions dialogs (not just in the Red Hat Virtualization dashboard) was handled in 2 different layers: code and translations. That caused invalid formatting for a number of language. In this release, the formatting of translated messages parameters on ui-extensions is done only on one layer, the translation layer (formatting done on code layer is removed). As a result, translation strings on ui-extensions dialogs are now displayed properly for all languages. 3.6. ovirt-log-collector BZ# 2040402 The log_days option of the sos logs plugin has been removed. As a result, the command that used this option began to fail. In this release, the use of the option has been removed, and the program now functions as expected. BZ# 2048546 Previously, using the sosreport command in the log collector utility produced a warning. In this release, the utility was modified to use the sos report command instead of the sosreport command. As a result, the warning is no longer displayed. and the utility will continue to work even when the sosreport is deprecated in the future. BZ# 2050566 Rebase package(s) to version: 4.4.5 Highlights, important fixes, or notable enhancements: 3.7. ovirt-web-ui BZ# 1667517 With this release, new console options, including set screen mode have been added to the VM Portal UI. The following console options can now be set in the VM Portal (under Account Settings > Console options): - default console type to use (Spice, VNC, noVNC, RDP for Windows), - full screen mode (on/off) per console type, - smartcard enabled/disabled - Ctrl+Alt+Del mapping - SSH key These console options settings are now persistent on the engine server, so deleting cookies and website data won't reset those settings. Limitations for these settings: 1. Console settings via VM Portal are global for all VMs and cannot be set per VM (as opposed to the Administration Portal, where console options are set per VM). 2. There is no sync between Administration Portal console options and VM Portal console options - The console options configuration done by Create/Edit VM/Pool dialog (supported console types and smartcard enabled) are synced, but the 'console options' run time settings done for running VMs via Console Console options are not synced with Administration Portal. 3. Console settings are part of Account settings and therefore are set per user. Each user logged in to the VM Portal can have their own console settings, defaults are taken from the vdc_options config parameters. BZ# 1781241 With this release, support for automatically connecting to a Virtual Machine has been restored as a configurable option. This is enabled in the Account Settings > Console tab. This feature enables the user to connect automatically to a running Virtual Machine every time the user logs in to the VM Portal. - Each user can choose a VM to auto connect to from a list on a global level, in the Account Settings > Console tab. - Only if the chosen VM exists and is running, the auto connect will be enforced time the user logs in. 
- The Console type for connecting will be chosen based on Account Settings > Console options. - This auto connect VM setting is persisted per user on the engine. BZ# 1991240 Previously, there was no way to set a quota different from that of the template from the VM portal. Thus, if the user had no access to the quota on the template, the user could not provision VMs from the template using the VM portal. In this release, the Red Hat Virtualization Manager selects a quota that the user has access to, and not necessarily from the template, when provisioning VMs from templates using the VM portal. 3.8. rhv-log-collector-analyzer BZ# 2010203 Previously, newlines included as part of the data were not handled properly, and as a result, the formatting of the table was wrong. In this release, the table format now is correct, even if the data contains newlines. BZ# 2013928 Previously if the data from the DB included special characters in the fields related to the vdc_options, i.e. the same ones that have special meaning in the ADOC format, they were used as is. This resulted in an incorrectly formatted HTML document. In this release, The code was modified to escape replacing some of the characters, and modified the code in a way that no longer translates some of the characters. AS a result, the information now properly presented, even if the DB fields contain special characters. BZ# 2051857 Rebase package(s) to version: 1.0.13 Highlights, important fixes, or notable enhancements: BZ# 2037121 rhv-image-discrepancies tools now shows Data Center and Storage Domain names in the output. rhvm-branding-rhv BZ# 2054756 With this release, a link to the Migration Toolkit for Virtualization documentation has been added to the welcome page of the Red Hat Virtualization Manager. 3.9. rhvm-setup-plugins BZ# 2050614 Rebase package(s) to version: 4.5.0 Highlights, important fixes, or notable enhancements: 3.10. vdsm BZ# 2075352 The following changes have been made to the way certificates are generated: Internal CA is issued for 20 years. Internal certificates are valid for 5 years. Internal HTTPS certificates (apache, websocket proxy) are valid for 398 days. CA is renewed 60 days before expiration. Certificates are renewed 365 days before expiration(CertExpirationWarnPeriodInDays configurable via engine-config). CertExpirationAlertPeriodInDays (defaulting to 30) is now also configurable by engine-config. Note that engine certificates and CA are checked/renewed only during engine-setup. Certificates on hosts are renewed/checked during host upgrade or a manual Enroll certificates action. 3.11. vulnerability BZ# 1964461 A flaw was found in normalize-url. Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data. BZ# 1995793 A flaw was found in nodejs-trim-off-newlines. All versions of package trim-off-newlines are vulnerable to Regular Expression Denial of Service (ReDoS) via string processing. The highest threat from this vulnerability is to system availability. BZ# 2007557 A regular expression denial of service (ReDoS) vulnerability was found in nodejs-ansi-regex. This could possibly cause an application using ansi-regex to use an excessive amount of CPU time when matching crafted ANSI escape codes.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_notes/rhsa_2022_4711_06_moderate_rhv_manager_ovirt_engine_ovirt_4_5_0_security_update
4.6. Removing the Cluster Configuration
4.6. Removing the Cluster Configuration To remove all cluster configuration files and stop all cluster services, thus permanently destroying a cluster, use the following command. Warning This command permanently removes any cluster configuration that has been created. It is recommended that you run pcs cluster stop before destroying the cluster.
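A minimal sketch of the recommended sequence follows; the --all option, which stops the cluster services on every node rather than only the local one, is an assumption about the desired scope:
# Stop cluster services first, as recommended above
pcs cluster stop --all
# Permanently destroy the cluster configuration
pcs cluster destroy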
[ "pcs cluster destroy" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-clusterremove-haar
Chapter 6. Using the survey editor
Chapter 6. Using the survey editor When ordering a product in automation services catalog, users may be required to provide additional information to complete the request. This prompt is created by a survey associated with the product. Initially created and edited in Ansible Tower, Catalog Administrators and portfolio users with update permissions can edit the survey in automation services catalog. Once submitted and validated, the surveys pass user-submitted extra variables to the job or workflow template run on Ansible Tower on the execution of an order. See Surveys in the Ansible Tower User Guide to learn more creating and editing surveys for job templates. Using the survey properties editor, you can: Set labels, helper text or placeholders to enhance the user experience of users providing information Further restrict existing validation parameters. Change validation messages. Hide, disable or set to read-only chosen fields in the survey. 6.1. Survey properties editor Values for surveys are defined in Ansible Tower and the automation services catalog properties editor allows Catalog Administrators to restrict survey options, such as adjusting default values and removing items from drop-down menus. Additionally, use the properties editor to hide, disable, or set a field to read only. Note Provide an initial value for any required field set to Hidden or Disabled. This includes read only fields. 6.2. Editing surveys You can edit fields in the surveys from the product detail view. Edit each field individually using the Properties editor. Prerequisites The product has an associated survey, created in Ansible Tower. Procedure Click Products . Locate the product and click on its title. Click and select Edit survey . Select a field in the survey. The Properties editor will appear. Click Properties to view the elements to edit for that field. Select Validation to configure the validator types used to validate user input. Click Save when finished.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_services_catalog/using_the_survey_editor
Chapter 8. AWS Local Zone or Wavelength Zone tasks
Chapter 8. AWS Local Zone or Wavelength Zone tasks After installing OpenShift Container Platform on Amazon Web Services (AWS), you can further configure AWS Local Zones or Wavelength Zones and an edge compute pool. 8.1. Extend existing clusters to use AWS Local Zones or Wavelength Zones As a post-installation task, you can extend an existing OpenShift Container Platform cluster on Amazon Web Services (AWS) to use AWS Local Zones or Wavelength Zones. Extending nodes to Local Zones or Wavelength Zones locations comprises the following steps: Adjusting the cluster-network maximum transmission unit (MTU). Opting in the Local Zones or Wavelength Zones group to AWS Local Zones or Wavelength Zones. Creating a subnet in the existing VPC for a Local Zones or Wavelength Zones location. Important Before you extend an existing OpenShift Container Platform cluster on AWS to use Local Zones or Wavelength Zones, check that the existing VPC contains available Classless Inter-Domain Routing (CIDR) blocks. These blocks are needed for creating the subnets. Creating the machine set manifest, and then creating a node in each Local Zone or Wavelength Zone location. Local Zones only: Adding the permission ec2:ModifyAvailabilityZoneGroup to the Identity and Access Management (IAM) user or role, so that the required network resources can be created. For example: Example of an additional IAM policy for AWS Local Zones deployments { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } Wavelength Zone only: Adding the permissions ec2:ModifyAvailabilityZoneGroup , ec2:CreateCarrierGateway , and ec2:DeleteCarrierGateway to the Identity and Access Management (IAM) user or role, so that the required network resources can be created. For example: Example of an additional IAM policy for AWS Wavelength Zones deployments { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DeleteCarrierGateway", "ec2:CreateCarrierGateway" ], "Resource": "*" }, { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } Additional resources For more information about AWS Local Zones, the supported instances types, and services, see AWS Local Zones features in the AWS documentation. For more information about AWS Local Zones, the supported instances types, and services, see AWS Wavelength features in the AWS documentation. 8.1.1. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Local Zones or Wavelength Zones locations. When deploying a cluster that uses Local Zones or Wavelength Zones, consider the following points: Amazon EC2 instances in the Local Zones or Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Local Zones or Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones or Wavelength Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zones or Wavelength Zones and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes . 
The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. You can access the following resources to learn more about a respective zone type: See How Local Zones work in the AWS documentation. See How AWS Wavelength works in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones or Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Local Zones or Wavelength Zones locations, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Local Zones or Wavelength Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Local Zone or Wavelength Zone on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones or Wavelength Zones nodes. The new labels are: node-role.kubernetes.io/edge='' Local Zones only: machine.openshift.io/zone-type=local-zone Wavelength Zones only: machine.openshift.io/zone-type=wavelength-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Local Zones or Wavelength Zones instances. Users can only run user workloads if they define tolerations in the pod specification. 8.2. Changing the cluster network MTU to support Local Zones or Wavelength Zones You might need to change the maximum transmission unit (MTU) value for the cluster network so that your cluster infrastructure can support Local Zones or Wavelength Zones subnets. 8.2.1. About the cluster MTU During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not usually need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure. Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance. 8.2.1.1. Service interruption considerations When you initiate an MTU change on your cluster, the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 8.2.1.2. MTU value selection When planning your MTU migration, there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes. For OpenShift SDN, the overhead is 50 bytes. 
If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 8.2.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 8.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. The overhead for OVN-Kubernetes is 100 bytes and for OpenShift SDN is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 8.2.1.4. Changing the cluster network MTU As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster. Important The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster. The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... 
Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23 ... To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to> . For OVN-Kubernetes, this value must be 100 less than the value of <machine_to> . For OpenShift SDN, this value must be 50 less than the value of <machine_to> . <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Finalize the MTU migration for your plugin. In both example commands, <mtu> specifies the new cluster network MTU that you specified with <overlay_to> . 
To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' To finalize the MTU migration, enter the following command for the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification Verify that the node in your cluster uses the MTU that you specified by entering the following command: USD oc describe network.config cluster 8.2.2. Opting in to AWS Local Zones or Wavelength Zones If you plan to create subnets in AWS Local Zones or Wavelength Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined the AWS Region where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Local Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Example command for listing available AWS Wavelength Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=wavelength-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Local Zones or Wavelength Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Local Zones or Wavelength Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the next step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Local Zones or Wavelength Zones where you want to create subnets. 8.2.3. Create network requirements in an existing VPC that uses AWS Local Zones or Wavelength Zones If you want the Machine API to create an Amazon EC2 instance in a remote zone location, you must create a subnet in a Local Zones or Wavelength Zones location. You can use any provisioning tool, such as Ansible or Terraform, to create subnets in the existing Virtual Private Cloud (VPC). You can configure the CloudFormation template to meet your requirements. 
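If you prefer not to run a CloudFormation stack for a quick, one-off test, the same kind of public subnet can also be created and associated directly with the AWS CLI. The following is a minimal sketch only; the VPC ID, route table ID, CIDR block, and zone name are placeholder assumptions, and the CloudFormation templates in the following subsections remain the documented approach.

# Create a public subnet in the opted-in zone (placeholder IDs and CIDR block).
aws ec2 create-subnet \
  --vpc-id vpc-0abc1234def567890 \
  --cidr-block 10.0.192.0/22 \
  --availability-zone us-east-1-nyc-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=<cluster_name>-public-us-east-1-nyc-1a}]'

# Associate the new subnet with the public route table for the zone.
aws ec2 associate-route-table \
  --route-table-id rtb-0abc1234def567890 \
  --subnet-id subnet-0abc1234def567890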
The following subsections include steps that use CloudFormation templates to create the network requirements that extend an existing VPC to use an AWS Local Zones or Wavelength Zones. Extending nodes to Local Zones requires that you create the following resources: 2 VPC Subnets: public and private. The public subnet associates to the public route table for the regular Availability Zones in the Region. The private subnet associates to the provided route table ID. Extending nodes to Wavelength Zones requires that you create the following resources: 1 VPC Carrier Gateway associated to the provided VPC ID. 1 VPC Route Table for Wavelength Zones with a default route entry to VPC Carrier Gateway. 2 VPC Subnets: public and private. The public subnet associates to the public route table for an AWS Wavelength Zone. The private subnet associates to the provided route table ID. Important Considering the limitation of NAT Gateways in Wavelength Zones, the provided CloudFormation templates support only associating the private subnets with the provided route table ID. A route table ID is attached to a valid NAT Gateway in the AWS Region. 8.2.4. Wavelength Zones only: Creating a VPC carrier gateway To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes. To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components: A carrier gateway that associates to the existing VPC. A carrier route table that lists route entries. A subnet that associates to the carrier route table. Carrier gateways exist for VPCs that only contain subnets in a Wavelength Zone. The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location: Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network. Performs Network Address Translation (NAT) functions, such as translating IP addresses that are public IP addresses stored in a network border group, from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic. Authorizes inbound traffic from a carrier network that is located in a specific location. Authorizes outbound traffic to a carrier network and the internet. Note No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway. You can use the provided CloudFormation template to create a stack of the following AWS resources: One carrier gateway that associates to the VPC ID in the template. One public route table for the Wavelength Zone named as <ClusterName>-public-carrier . Default IPv4 route entry in the new route table that targets the carrier gateway. VPC gateway endpoint for an AWS Simple Storage Service (S3). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . 
Procedure Go to the section of the documentation named "CloudFormation template for the VPC Carrier Gateway", and then copy the syntax from the CloudFormation template for VPC Carrier Gateway template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \// ParameterKey=VpcId,ParameterValue="USD{VpcId}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{ClusterName}" 4 1 <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw . You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS". 4 <ClusterName> is a custom value that prefixes to resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f Verification Confirm that the CloudFormation template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create for your cluster. PublicRouteTableId The ID of the Route Table in the Carrier infrastructure. 8.2.5. Wavelength Zones only: CloudFormation template for the VPC Carrier Gateway You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure. Example 8.1. CloudFormation template for VPC Carrier Gateway AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: "AWS::EC2::CarrierGateway" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "cagw"]] PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public-carrier"]] PublicRoute: Type: "AWS::EC2::Route" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. 
- !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 8.2.6. Creating subnets for AWS edge compute services Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create a subnet in Local Zones or Wavelength Zones. Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zones or Wavelength Zones group. Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> for Local Zones and cluster-wl-<wavelength_zone_shortname> for Wavelength Zones. You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the value of Local Zones or Wavelength Zones name to create the subnets. 6 USD{ROUTE_TABLE_PUB} is the Public Route Table Id extracted from the CloudFormation template. For Local Zones, the public route table is extracted from the VPC CloudFormation Stack. For Wavelength Zones, the value must be extracted from the output of the VPC's carrier gateway CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . 
Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters: PublicSubnetId The IDs of the public subnet created by the CloudFormation stack. PrivateSubnetId The IDs of the private subnet created by the CloudFormation stack. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create for your cluster. 8.2.7. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones or Wavelength Zones infrastructure. Example 8.2. CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. 
Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] 8.2.8. Creating a machine set manifest for an AWS Local Zones or Wavelength Zones node After you create subnets in AWS Local Zones or Wavelength Zones, you can create a machine set manifest. The installation program sets the following labels for the edge machine pools at cluster installation time: machine.openshift.io/parent-zone-name: <value_of_ParentZoneName> machine.openshift.io/zone-group: <value_of_ZoneGroup> machine.openshift.io/zone-type: <value_of_ZoneType> The following procedure details how you can create a machine set configuration that matches the edge compute pool configuration. Prerequisites You have created subnets in AWS Local Zones or Wavelength Zones. Procedure Manually preserve edge machine pool labels when creating the machine set manifest by querying the AWS API. To complete this action, enter the following command in your command-line interface (CLI): USD aws ec2 describe-availability-zones --region <value_of_Region> \ 1 --query 'AvailabilityZones[].{ ZoneName: ZoneName, ParentZoneName: ParentZoneName, GroupName: GroupName, ZoneType: ZoneType}' \ --filters Name=zone-name,Values=<value_of_ZoneName> \ 2 --all-availability-zones 1 For <value_of_Region> , specify the name of the region for the zone. 2 For <value_of_ZoneName> , specify the name of the Local Zone or Wavelength Zone. Example output for Local Zone us-east-1-nyc-1a [ { "ZoneName": "us-east-1-nyc-1a", "ParentZoneName": "us-east-1f", "GroupName": "us-east-1-nyc-1", "ZoneType": "local-zone" } ] Example output for Wavelength Zone us-east-1-wl1 [ { "ZoneName": "us-east-1-wl1-bos-wlz-1", "ParentZoneName": "us-east-1a", "GroupName": "us-east-1-wl1", "ZoneType": "wavelength-zone" } ] 8.2.8.1. Sample YAML for a compute machine set custom resource on AWS This sample YAML defines a compute machine set that runs in the us-east-1-nyc-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/edge: "" . Note If you want to reference the sample YAML file in the context of Wavelength Zones, ensure that you replace the AWS Region and zone information with supported Wavelength Zone values. In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <edge> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-edge-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: edge 3 machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> spec: metadata: labels: machine.openshift.io/parent-zone-name: <value_of_ParentZoneName> machine.openshift.io/zone-group: <value_of_GroupName> machine.openshift.io/zone-type: <value_of_ZoneType> node-role.kubernetes.io/edge: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: id: <value_of_PublicSubnetIds> 7 publicIp: true tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/edge effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, edge role node label, and zone name. 3 Specify the edge role node label. 4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1-nyc-1a . 6 Specify the region, for example, us-east-1 . 7 The ID of the public subnet that you created in AWS Local Zones or Wavelength Zones. You created this public subnet ID when you finished the procedure for "Creating a subnet in an AWS zone". 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 9 Specify a taint to prevent user workloads from being scheduled on edge nodes. 
Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.8.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-edge-us-east-1-nyc-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 
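If the edge machine set stays at zero ready replicas for an extended time, inspecting the underlying Machine resources often reveals the cause, such as a missing subnet ID or an instance type that is not offered in the zone. The following is a hedged diagnostic sketch; the machine name is a placeholder taken from the listing.

# List the machines created by the edge machine set and check their phase.
oc get machines -n openshift-machine-api | grep edge

# Show the provider status and events for a machine that is stuck in the Provisioning phase.
oc describe machine <machine_name> -n openshift-machine-api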
Optional: To check nodes that were created by the edge machine, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f Additional resources Installing a cluster on AWS with compute nodes on AWS Local Zones Installing a cluster on AWS with compute nodes on AWS Wavelength Zones 8.3. Creating user workloads in AWS Local Zones or Wavelength Zones After you create an Amazon Web Service (AWS) Local Zones or Wavelength Zones infrastructure and deploy your cluster, you can use edge compute nodes to create user workloads in Local Zones or Wavelength Zones subnets. When you use the installation program to create a cluster, the installation program automatically specifies a taint effect of NoSchedule to each edge compute node. This means that a scheduler does not add a new pod, or deployment, to a node if the pod does not match the specified tolerations for a taint. You can modify the taint for better control over how nodes create workloads in each Local Zones or Wavelength Zones subnet. The installation program creates the compute machine set manifests file with node-role.kubernetes.io/edge and node-role.kubernetes.io/worker labels applied to each edge compute node that is located in a Local Zones or Wavelength Zones subnet. Note The examples in the procedure are for a Local Zones infrastructure. If you are working with a Wavelength Zones infrastructure, ensure you adapt the examples to what is supported in this infrastructure. Prerequisites You have access to the OpenShift CLI ( oc ). You deployed your cluster in a Virtual Private Cloud (VPC) with defined Local Zones or Wavelength Zones subnets. You ensured that the compute machine set for the edge compute nodes on Local Zones or Wavelength Zones subnets specifies the taints for node-role.kubernetes.io/edge . Procedure Create a deployment resource YAML file for an example application to be deployed in the edge compute node that operates in a Local Zones subnet. Ensure that you specify the correct tolerations that match the taints for the edge compute node. 
Example of a configured deployment resource for an edge compute node that operates in a Local Zone subnet kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: "node-role.kubernetes.io/edge" operator: "Equal" value: "" effect: "NoSchedule" containers: - image: openshift/origin-node command: - "/bin/socat" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: "/mnt/storage" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name> 1 storageClassName : For the Local Zone configuration, you must specify gp2-csi . 2 kind : Defines the deployment resource. 3 name : Specifies the name of your Local Zone application. For example, local-zone-demo-app-nyc-1 . 4 namespace: Defines the namespace for the AWS Local Zone where you want to run the user workload. For example: local-zone-app-nyc-1a . 5 zone-group : Defines the group to where a zone belongs. For example, us-east-1-iah-1 . 6 nodeSelector : Targets edge compute nodes that match the specified labels. 7 tolerations : Sets the values that match with the taints defined on the MachineSet manifest for the Local Zone node. Create a service resource YAML file for the node. This resource exposes a pod from a targeted edge compute node to services that run inside your Local Zone network. Example of a configured service resource for an edge compute node that operates in a Local Zone subnet apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application> 1 kind : Defines the service resource. 2 selector: Specifies the label type applied to managed pods. Additional resources Installing a cluster on AWS with compute nodes on AWS Local Zones Installing a cluster on AWS with compute nodes on AWS Wavelength Zones Understanding taints and tolerations 8.4. steps Optional: Use the AWS Load Balancer (ALB) Operator to expose a pod from a targeted edge compute node to services that run inside of a Local Zones or Wavelength Zones subnet from a public network. See Installing the AWS Load Balancer Operator .
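To confirm that the example workload from the previous section was scheduled onto an edge compute node and that its NodePort service was created, a quick check might look like the following; the namespace and application names are the placeholders used above.

# Check that the pod is running and note the node it was scheduled on (NODE column).
oc get pods -n <local_zone_application_namespace> -o wide

# Confirm that the NodePort service exists and note the allocated port.
oc get service <local_zone_application> -n <local_zone_application_namespace>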
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DeleteCarrierGateway\", \"ec2:CreateCarrierGateway\" ], \"Resource\": \"*\" }, { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "oc describe network.config cluster", "Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'", "oc get machineconfigpools", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/mtu-migration.sh", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'", "oc get machineconfigpools", "oc describe network.config cluster", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=wavelength-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters \\// ParameterKey=VpcId,ParameterValue=\"USD{VpcId}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{ClusterName}\" 4", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. 
Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: \"AWS::EC2::CarrierGateway\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"cagw\"]] PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public-carrier\"]] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. 
Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "aws ec2 describe-availability-zones --region <value_of_Region> \\ 1 --query 'AvailabilityZones[].{ ZoneName: ZoneName, ParentZoneName: ParentZoneName, GroupName: GroupName, ZoneType: ZoneType}' --filters Name=zone-name,Values=<value_of_ZoneName> \\ 2 --all-availability-zones", "[ { \"ZoneName\": \"us-east-1-nyc-1a\", \"ParentZoneName\": \"us-east-1f\", \"GroupName\": \"us-east-1-nyc-1\", \"ZoneType\": \"local-zone\" } ]", "[ { \"ZoneName\": \"us-east-1-wl1-bos-wlz-1\", \"ParentZoneName\": \"us-east-1a\", \"GroupName\": \"us-east-1-wl1\", \"ZoneType\": \"wavelength-zone\" } ]", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-edge-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: edge 3 machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> spec: metadata: labels: machine.openshift.io/parent-zone-name: <value_of_ParentZoneName> machine.openshift.io/zone-group: <value_of_GroupName> machine.openshift.io/zone-type: <value_of_ZoneType> node-role.kubernetes.io/edge: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: id: <value_of_PublicSubnetIds> 7 publicIp: true tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/edge effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "oc get 
machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-edge-us-east-1-nyc-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f", "kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: \"node-role.kubernetes.io/edge\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name>", "apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_aws/aws-compute-edge-zone-tasks
Chapter 11. Configuring the cluster network range
Chapter 11. Configuring the cluster network range As a cluster administrator, you can expand the cluster network range after cluster installation. You might want to expand the cluster network range if you need more IP addresses for additional nodes. For example, if you deployed a cluster and specified 10.128.0.0/19 as the cluster network range and a host prefix of 23 , you are limited to 16 nodes. You can expand that to 510 nodes by changing the CIDR mask on a cluster to /14 . When expanding the cluster network address range, your cluster must use the OVN-Kubernetes network plugin . Other network plugins are not supported. The following limitations apply when modifying the cluster network IP address range: The CIDR mask size specified must always be smaller than the currently configured CIDR mask size, because you can only increase IP space by adding more nodes to an installed cluster The host prefix cannot be modified Pods that are configured with an overridden default gateway must be recreated after the cluster network expands 11.1. Expanding the cluster network IP address range You can expand the IP address range for the cluster network. Because this change requires rolling out a new Operator configuration across the cluster, it can take up to 30 minutes to take effect. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To obtain the cluster network range and host prefix for your cluster, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/22","hostPrefix":23}] To expand the cluster network IP address range, enter the following command. Use the CIDR IP address range and host prefix returned from the output of the previous command. USD oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"<network>/<cidr>","hostPrefix":<prefix>} ], "networkType": "OVNKubernetes" } }' where: <network> Specifies the network part of the cidr field that you obtained from the previous step. You cannot change this value. <cidr> Specifies the network prefix length. For example, 14 . Change this value to a smaller number than the value from the output in the previous step to expand the cluster network range. <prefix> Specifies the current host prefix for your cluster. This value must be the same value for the hostPrefix field that you obtained from the previous step. Example command USD oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"10.217.0.0/14","hostPrefix": 23} ], "networkType": "OVNKubernetes" } }' Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take up to 30 minutes for this change to take effect. USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/14","hostPrefix":23}] 11.2. Additional resources Red Hat OpenShift Network Calculator About the OVN-Kubernetes network plugin
[ "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"", "[{\"cidr\":\"10.217.0.0/22\",\"hostPrefix\":23}]", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"<network>/<cidr>\",\"hostPrefix\":<prefix>} ], \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"10.217.0.0/14\",\"hostPrefix\": 23} ], \"networkType\": \"OVNKubernetes\" } }'", "network.config.openshift.io/cluster patched", "oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"", "[{\"cidr\":\"10.217.0.0/14\",\"hostPrefix\":23}]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/configuring-cluster-network-range
5.296. seabios
5.296. seabios 5.296.1. RHBA-2012:0802 - seabios bug fix and enhancement update Updated seabios packages that fix several bugs and add multiple enhancements are now available for Red Hat Enterprise Linux 6. The seabios package contains a legacy BIOS implementation, which can be used as a coreboot payload. Bug Fixes BZ# 757999 Previously, SeaBIOS sometimes booted from an incorrect drive. This happened because the QEMU hard-drive priority was lower than the virtio block-device priority. With this update, the QEMU hard-drive priority has been raised above the virtio block-device priority and SeaBIOS now boots from the correct drive. BZ# 771946 Previously, a guest could remain unresponsive during boot after the S3 (Suspend to RAM) state as SeaBIOS failed to advertise to the guest's operating system that the device was powered down. With this update, the underlying code handling the block device resume has been fixed and the problem no longer occurs. BZ# 786142 Previously, a Windows guest could detect an HPET (High Precision Event Timer) device although the guest had the HPET device disabled. This occurred because the HPET device was defined in the DSDT (Differentiated System Description Table). This update removes the definition from the table and the problem no longer occurs. BZ# 801293 Booting from some USB flash drives could fail because SeaBIOS did not support recovery from USB STALL conditions. This update adds support for recovery from STALLs. BZ# 804933 RTC (Real-Time Clock) wake-up for Windows guest did not work. With this update, the underlying code of FADT (Fixed ACPI Description Table) has been fixed to match QEMU behavior and the problem no longer occurs. BZ# 808033 Previously, if a device was hot plugged while the guest was still processing a hot-plug event, the new hot-plug event failed to be processed and the device was not detected. With this update, SeaBIOS uses a different event to handle hotplugging and the problem no longer occurs. BZ# 810471 Guest booting could fail if the guest had more than 62 sockets and multiple virtio disk devices. This happened because, BIOS ran out of memory and failed to initialize the boot disk. With this update, new memory is allocated under these circumstances and booting succeeds. Enhancements BZ# 809797 The in-guest S4 (Suspend-to-Disk) and S3 (Suspend-to-RAM) power management features were added as a Technology Preview. The features provide the ability to perform suspend-to-disk and suspend-to-RAM functions on the guest. To enable the feature, users have to choose the /usr/share/seabios/bios-pm.bin file for VM BIOS instead of the default /usr/share/seabios/bios.bin file through libvirt. BZ# 782028 SeaBIOS now supports booting from virtio-scsi devices. More information about Red Hat Technology Previews is available here: https://access.redhat.com/support/offerings/techpreview/ All seabios users are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/seabios
probe::signal.send.return
probe::signal.send.return Name probe::signal.send.return - Signal being sent to a process completed Synopsis Values retstr The return value of either __group_send_sig_info, specific_send_sig_info, or send_sigqueue send2queue Indicates whether the sent signal was sent to an existing sigqueue name The name of the function used to send out the signal shared Indicates whether the sent signal is shared by the thread group. Context The signal's sender. Description Possible __group_send_sig_info and specific_send_sig_info return values are as follows: 0 -- The signal is successfully sent to a process, which means that (1) the signal was ignored by the receiving process, (2) this is a non-RT signal and the system already has one queued, or (3) the signal was successfully added to the sigqueue of the receiving process. -EAGAIN -- The sigqueue of the receiving process is overflowing, the signal was RT, and the signal was sent by a user using something other than kill . Possible send_group_sigqueue and send_sigqueue return values are as follows: 0 -- The signal was either successfully added into the sigqueue of the receiving process, or a SI_TIMER entry is already queued (in which case, the overrun count will be simply incremented). 1 -- The signal was ignored by the receiving process. -1 -- (send_sigqueue only) The task was marked exiting, allowing posix_timer_event to redirect it to the group leader.
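As an illustration only (not part of the upstream reference), the probe can be exercised with a stap one-liner that prints the values listed above; this sketch assumes the shared and send2queue values behave as numeric flags:
stap -e 'probe signal.send.return { printf("%s: return=%s shared=%d send2queue=%d\n", name, retstr, shared, send2queue) }'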
[ "signal.send.return" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-send-return
22.6. JBoss Operations Network Remote-Client Server Plugin
22.6. JBoss Operations Network Remote-Client Server Plugin 22.6.1. JBoss Operations Network Plugin Metrics Table 22.1. JBoss Operations Network Traits for the Cache Container (Cache Manager) Trait Name Display Name Description cache-manager-status Cache Container Status The current runtime status of a cache container. cluster-name Cluster Name The name of the cluster. members Cluster Members The names of the members of the cluster. coordinator-address Coordinator Address The coordinator node's address. local-address Local Address The local node's address. version Version The cache manager version. defined-cache-names Defined Cache Names The caches that have been defined for this manager. Table 22.2. JBoss Operations Network Metrics for the Cache Container (Cache Manager) Metric Name Display Name Description cluster-size Cluster Size How many members are in the cluster. defined-cache-count Defined Cache Count How many caches that have been defined for this manager. running-cache-count Running Cache Count How many caches are running under this manager. created-cache-count Created Cache Count How many caches have actually been created under this manager. Table 22.3. JBoss Operations Network Traits for the Cache Trait Name Display Name Description cache-status Cache Status The current runtime status of a cache. cache-name Cache Name The current name of the cache. version Version The cache version. Table 22.4. JBoss Operations Network Metrics for the Cache Metric Name Display Name Description cache-status Cache Status The current runtime status of a cache. number-of-locks-available [LockManager] Number of locks available The number of exclusive locks that are currently available. concurrency-level [LockManager] Concurrency level The LockManager's configured concurrency level. average-read-time [Statistics] Average read time Average number of milliseconds required for a read operation on the cache to complete. hit-ratio [Statistics] Hit ratio The result (in percentage) when the number of hits (successful attempts) is divided by the total number of attempts. elapsed-time [Statistics] Seconds since cache started The number of seconds since the cache started. read-write-ratio [Statistics] Read/write ratio The read/write ratio (in percentage) for the cache. average-write-time [Statistics] Average write time Average number of milliseconds a write operation on a cache requires to complete. hits [Statistics] Number of cache hits Number of cache hits. evictions [Statistics] Number of cache evictions Number of cache eviction operations. remove-misses [Statistics] Number of cache removal misses Number of cache removals where the key was not found. time-since-reset [Statistics] Seconds since cache statistics were reset Number of seconds since the last cache statistics reset. number-of-entries [Statistics] Number of current cache entries Number of entries currently in the cache. stores [Statistics] Number of cache puts Number of cache put operations remove-hits [Statistics] Number of cache removal hits Number of cache removal operation hits. misses [Statistics] Number of cache misses Number of cache misses. success-ratio [RpcManager] Successful replication ratio Successful replications as a ratio of total replications in numeric double format. 
replication-count [RpcManager] Number of successful replications Number of successful replications replication-failures [RpcManager] Number of failed replications Number of failed replications average-replication-time [RpcManager] Average time spent in the transport layer The average time (in milliseconds) spent in the transport layer. commits [Transactions] Commits Number of transaction commits performed since the last reset. prepares [Transactions] Prepares Number of transaction prepares performed since the last reset. rollbacks [Transactions] Rollbacks Number of transaction rollbacks performed since the last reset. invalidations [Invalidation] Number of invalidations Number of invalidations. passivations [Passivation] Number of cache passivations Number of passivation events. activations [Activations] Number of cache entries activated Number of activation events. cache-loader-loads [Activation] Number of cache store loads Number of entries loaded from the cache store. cache-loader-misses [Activation] Number of cache store misses Number of entries that did not exist in the cache store. cache-loader-stores [CacheStore] Number of cache store stores Number of entries stored in the cache stores. Note Gathering of some of these statistics is disabled by default. JBoss Operations Network Metrics for Connectors The metrics provided by the JBoss Operations Network (JON) plugin for Red Hat JBoss Data Grid are for REST and Hot Rod endpoints only. For the REST protocol, the data must be taken from the Web subsystem metrics. For details about each of these endpoints, see the Getting Started Guide . Table 22.5. JBoss Operations Network Metrics for the Connectors Metric Name Display Name Description bytesRead Bytes Read Number of bytes read. bytesWritten Bytes Written Number of bytes written. Note Gathering of these statistics is disabled by default. Report a bug 22.6.2. JBoss Operations Network Plugin Operations Table 22.6. JBoss ON Plugin Operations for the Cache Operation Name Description Start Cache Starts the cache. Stop Cache Stops the cache. Clear Cache Clears the cache contents. Reset Statistics Resets statistics gathered by the cache. Reset Activation Statistics Resets activation statistics gathered by the cache. Reset Invalidation Statistics Resets invalidations statistics gathered by the cache. Reset Passivation Statistics Resets passivation statistics gathered by the cache. Reset Rpc Statistics Resets replication statistics gathered by the cache. Remove Cache Removes the given cache from the cache-container. Record Known Global Keyset Records the global known keyset to a well-known key for retrieval by the upgrade process. Synchronize Data Synchronizes data from the old cluster to this using the specified migrator. Disconnect Source Disconnects the target cluster from the source cluster according to the specified migrator. JBoss Operations Network Plugin Operations for the Cache Backups The cache backups used for these operations are configured using cross-datacenter replication. In the JBoss Operations Network (JON) User Interface, each cache backup is the child of a cache. For more information about cross-datacenter replication, see Chapter 31, Set Up Cross-Datacenter Replication Table 22.7. JBoss Operations Network Plugin Operations for the Cache Backups Operation Name Description status Display the site status. bring-site-online Brings the site online. take-site-offline Takes the site offline. 
Cache (Transactions) Red Hat JBoss Data Grid does not support using Transactions in Remote Client-Server mode. As a result, none of the endpoints can use transactions. 22.6.3. JBoss Operations Network Plugin Attributes Table 22.8. JBoss ON Plugin Attributes for the Cache (Transport) Attribute Name Type Description cluster string The name of the group communication cluster. executor string The executor used for the transport. lock-timeout long The timeout period for locks on the transport. The default value is 240000 . machine string A machine identifier for the transport. rack string A rack identifier for the transport. site string A site identifier for the transport. stack string The JGroups stack used for the transport. 22.6.4. Create a New Cache Using JBoss Operations Network (JON) Use the following steps to create a new cache using JBoss Operations Network (JON) for Remote Client-Server mode. Procedure 22.3. Creating a new cache in Remote Client-Server mode Log into the JBoss Operations Network Console. From the JBoss Operations Network console, click Inventory . Select Servers from the Resources list on the left of the console. Select the specific Red Hat JBoss Data Grid server from the servers list. Below the server name, click infinispan and then Cache Containers . Select the desired cache container that will be the parent for the newly created cache. Right-click the selected cache container. For example, clustered . In the context menu, navigate to Create Child and select Cache . Create a new cache in the resource create wizard. Enter the new cache name and click Next . Set the cache attributes in the Deployment Options and click Finish . Note Refresh the view of caches in order to see the newly added resource. It may take several minutes for the Resource to show up in the Inventory.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-JBoss_Operations_Network_Remote-Client_Server_Plugin
Chapter 1. Multi-cell overcloud deployments
Chapter 1. Multi-cell overcloud deployments You can use cells to divide Compute nodes in large deployments into groups, each with a message queue and dedicated database that contains instance information. By default, director installs the overcloud with a single cell for all Compute nodes. This cell contains all the Compute services and databases, and all the instances and instance metadata. For larger deployments, you can deploy the overcloud with multiple cells to accommodate a larger number of Compute nodes. You can add cells to your environment when you install a new overcloud or at any time afterwards. In multi-cell deployments, each cell runs standalone copies of the cell-specific Compute services and databases, and stores instance metadata only for instances in that cell. Global information and cell mappings are stored in the global Controller cell, which provides security and recovery in case one of the cells fails. Caution If you add cells to an existing overcloud, the conductor in the default cell also performs the role of the super conductor. This has a negative effect on conductor communication with the cells in the deployment, and on the performance of the overcloud. Also, if you take the default cell offline, you take the super conductor offline as well, which stops the entire overcloud deployment. Therefore, to scale an existing overcloud, do not add any Compute nodes to the default cell. Instead, add Compute nodes to the new cells you create, allowing the default cell to act as the super conductor. To create a multi-cell overcloud, you must perform the following tasks: Configure and deploy your overcloud to handle multiple cells. Create and provision the new cells that you require within your deployment. Add Compute nodes to each cell. Add each Compute cell to an availability zone. 1.1. Prerequisites You have deployed a basic overcloud with the required number of Controller nodes. 1.2. Global components and services The following components are deployed in a Controller cell once for each overcloud, regardless of the number of Compute cells. Compute API Provides the external REST API to users. Compute scheduler Determines on which Compute node to assign the instances. Placement service Monitors and allocates Compute resources to the instances. API database Used by the Compute API and the Compute scheduler services to track location information about instances, and provides a temporary location for instances that are built but not scheduled. In multi-cell deployments, this database also contains cell mappings that specify the database connection for each cell. cell0 database Dedicated database for information about instances that failed to be scheduled. Super conductor This service exists only in multi-cell deployments to coordinate between the global services and each Compute cell. This service also sends failed instance information to the cell0 database. 1.3. Cell-specific components and services The following components are deployed in each Compute cell. Cell database Contains most of the information about instances. Used by the global API, the conductor, and the Compute services. Conductor Coordinates database queries and long-running tasks from the global services, and insulates Compute nodes from direct database access. Message queue Messaging service used by all services to communicate with each other within the cell and with the global services. 1.4. Cell deployments architecture The default overcloud that director installs has a single cell for all Compute nodes. 
You can scale your overcloud by adding more cells, as illustrated by the following architecture diagrams. Single-cell deployment architecture The following diagram shows an example of the basic structure and interaction in a default single-cell overcloud. In this deployment, all services are configured to use a single conductor to communicate between the Compute API and the Compute nodes, and a single database stores all live instance data. In smaller deployments this configuration might be sufficient, but if any global API service or database fails, the entire Compute deployment cannot send or receive information, regardless of high availability configurations. Multi-cell deployment architecture The following diagram shows an example of the basic structure and interaction in a custom multi-cell overcloud. In this deployment, the Compute nodes are divided to multiple cells, each with their own conductor, database, and message queue. The global services use the super conductor to communicate with each cell, and the global database contains only information required for the whole overcloud. The cell-level services cannot access global services directly. This isolation provides additional security and fail-safe capabilities in case of cell failure. Important Do not run any Compute services on the first cell, which is named "default". Instead, deploy each new cell containing the Compute nodes separately. 1.5. Considerations for multi-cell deployments Maximum number of Compute nodes in a multi-cell deployment The maximum number of Compute nodes is 500 across all cells. Cross-cell instance migrations Migrating an instance from a host in one cell to a host in another cell is not supported. This limitation affects the following operations: cold migration live migration unshelve resize evacuation Service quotas Compute service quotas are calculated dynamically at each resource consumption point, instead of statically in the database. In multi-cell deployments, unreachable cells cannot provide usage information in real-time, which might cause the quotas to be exceeded when the cell is reachable again. You can use the Placement service and API database to configure the quota calculation to withstand failed or unreachable cells. API database The Compute API database is always global for all cells and cannot be duplicated for each cell. Console proxies You must configure console proxies for each cell, because console token authorizations are stored in cell databases. Each console proxy server needs to access the database.connection information of the corresponding cell database. Compute metadata API If you use the same network for all the cells in your multiple cell environment, you must run the Compute metadata API globally so that it can bridge between the cells. When the Compute metadata API is run globally it needs access to the api_database.connection information. If you deploy a multiple cell environment with routed networks, you must run the Compute metadata API separately in each cell to improve performance and data isolation. When the Compute metadata API runs in each cell, the neutron-metadata-agent service must point to the corresponding nova-api-metadata service. You use the parameter NovaLocalMetadataPerCell to control where the Compute metadata API runs.
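As a sketch of how the last point is typically wired up, the NovaLocalMetadataPerCell parameter can be set in a custom environment file that you pass when deploying the affected cell; the file name below is hypothetical:
# local-metadata.yaml (hypothetical file name)
parameter_defaults:
  NovaLocalMetadataPerCell: true
Include the file with the -e option of the openstack overcloud deploy command for the cell stack in question.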
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/scaling_deployments_with_compute_cells/assembly_multi-cell-overcloud-deployments_cellsv2
Chapter 1. Extension APIs
Chapter 1. Extension APIs 1.1. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 1.2. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object 1.3. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects and may change the object. Type object 1.4. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it. Type object
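Although this chapter is only a type listing, each of these APIs can be inspected from the command line with the standard resource names; the following commands are shown here purely as an illustration:
oc get apiservices
oc get customresourcedefinitions
oc get mutatingwebhookconfigurations
oc get validatingwebhookconfigurations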
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/extension_apis/extension-apis
Chapter 5. Preparing PXE assets for OpenShift Container Platform
Chapter 5. Preparing PXE assets for OpenShift Container Platform Use the following procedures to create the assets needed to PXE boot an OpenShift Container Platform cluster using the Agent-based Installer. The assets you create in these procedures will deploy a single-node OpenShift Container Platform installation. You can use these procedures as a basis and modify configurations according to your requirements. See Installing an OpenShift Container Platform cluster with the Agent-based Installer to learn about more configurations available with the Agent-based Installer. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. 5.2. Downloading the Agent-based Installer Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 5.3. Creating the preferred configuration inputs Use this procedure to create the preferred configuration inputs used to create the PXE files. Procedure Install nmstate dependency by running the following command: USD sudo dnf install /usr/bin/nmstatectl -y Place the openshift-install binary in a directory that is on your PATH. Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Note This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional. Create the install-config.yaml file by running the following command: USD cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF 1 Specify the system architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64 , amd64 , s390x , and ppc64le . Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster". 2 Required. Specify your cluster name. 3 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 4 Specify your platform. Note For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file. 5 Specify your pull secret. 6 Specify your SSH public key. 
Note If you set the platform to vSphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 Note When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. If this address is not provided, one IP address is selected from the provided hosts' networkConfig . 2 Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. 3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods. 4 Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. 5 Optional: Configures the network interface of a host in NMState format. Optional: To create an iPXE script, add the bootArtifactsBaseURL to the agent-config.yaml file: apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 bootArtifactsBaseURL: <asset_server_URL> Where <asset_server_URL> is the URL of the server you will upload the PXE assets to. Additional resources Deploying with dual-stack networking . Configuring the install-config yaml file . See Configuring a three-node cluster to deploy three-node clusters in bare metal environments. About root device hints . NMState state examples . Optional: Creating additional manifest files 5.4. Creating the PXE assets Use the following procedure to create the assets and optional script to implement in your PXE infrastructure.
Procedure Create the PXE assets by running the following command: USD openshift-install agent create pxe-files The generated PXE assets and optional iPXE script can be found in the boot-artifacts directory. Example filesystem with PXE assets and optional iPXE script boot-artifacts ├─ agent.x86_64-initrd.img ├─ agent.x86_64.ipxe ├─ agent.x86_64-rootfs.img └─ agent.x86_64-vmlinuz Important The contents of the boot-artifacts directory vary depending on the specified architecture. Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Upload the PXE assets and optional script to your infrastructure where they will be accessible during the boot process. Note If you generated an iPXE script, the location of the assets must match the bootArtifactsBaseURL you added to the agent-config.yaml file. 5.5. Manually adding IBM Z agents After creating the PXE assets, you can add IBM Z(R) agents. Only use this procedure for IBM Z(R) clusters. Depending on your IBM Z(R) environment, you can choose from the following options: Adding IBM Z(R) agents with z/VM Adding IBM Z(R) agents with RHEL KVM Adding IBM Z(R) agents with Logical Partition (LPAR) Note Currently, ISO boot support on IBM Z(R) ( s390x ) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported. 5.5.1. Adding IBM Z agents with z/VM Use the following procedure to manually add IBM Z(R) agents with z/VM. Only use this procedure for IBM Z(R) clusters with z/VM. Procedure Create a parameter file for the z/VM guest: Example parameter file rd.neednet=1 \ console=ttysclp0 \ coreos.live.rootfs_url=<rootfs_url> \ 1 ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ 2 zfcp.allow_lun_scan=0 \ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.dasd=0.0.4411 \ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5 random.trust_cpu=on rd.luks.options=discard \ ignition.firstboot ignition.platform.id=metal \ console=tty1 console=ttyS1,115200n8 \ coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" 1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are booting. Only HTTP and HTTPS protocols are supported. 2 For the ip parameter, assign the IP address automatically using DHCP, or manually assign the IP address, as described in "Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE". 3 The default is 1 . Omit this entry when using an OSA network adapter. 4 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. Omit this entry for FCP-type disks. 5 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. Leave all other parameters unchanged. Punch the kernel.img , generic.parm , and initrd.img files to the virtual reader of the z/VM guest virtual machine. For more information, see PUNCH (IBM Documentation). Tip You can use the CP PUNCH command or, if you use Linux, the vmur command, to transfer files between two z/VM guest virtual machines. 
Log in to the conversational monitor system (CMS) on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: USD ipl c For more information, see IPL (IBM Documentation). Additional resources Installing a cluster with z/VM on IBM Z and IBM LinuxONE 5.5.2. Adding IBM Z agents with RHEL KVM Use the following procedure to manually add IBM Z(R) agents with RHEL KVM. Only use this procedure for IBM Z(R) clusters with RHEL KVM. Procedure Boot your RHEL KVM machine. To deploy the virtual server, run the virt-install command with the following parameters: USD virt-install \ --name <vm_name> \ --autostart \ --ram=16384 \ --cpu host \ --vcpus=8 \ --location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \ 1 --disk <qcow_image_path> \ --network network:macvtap ,mac=<mac_address> \ --graphics none \ --noautoconsole \ --wait=-1 \ --extra-args "rd.neednet=1 nameserver=<nameserver>" \ --extra-args "ip=<IP>::<nameserver>::<hostname>:enc1:none" \ --extra-args "coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img" \ --extra-args "random.trust_cpu=on rd.luks.options=discard" \ --extra-args "ignition.firstboot ignition.platform.id=metal" \ --extra-args "console=tty1 console=ttyS1,115200n8" \ --extra-args "coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" \ --osinfo detect=on,require=off 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 5.5.3. Adding IBM Z agents in a Logical Partition (LPAR) Use the following procedure to manually add IBM Z(R) agents to your cluster that runs in an LPAR environment. Use this procedure only for IBM Z(R) clusters running in an LPAR. Prerequisites You have Python 3 installed. Procedure Create a boot parameter file for the agents. Example parameter file rd.neednet=1 cio_ignore=all,!condev \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 \ 4 random.trust_cpu=on rd.luks.options=discard 1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported. 2 For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE . 3 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. 4 Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets. Generate the .ins and initrd.img.addrsize files by running the following Python script: The .ins file is a special file that includes installation data and is present on the FTP server. It can be accessed from the HMC system. This file contains details such as mapping of the location of installation data on the disk or FTP server, the memory locations where the data is to be copied. Note The .ins and initrd.img.addrsize files are not automatically generated as part of boot-artifacts from the installer. You must manually generate these files. 
Save the following script to a file, such as generate-files.py : Example of a Python file named generate-files.py file # The following commands retrieve the size of the `kernel` and `initrd`: KERNEL_IMG_PATH='./kernel.img' INITRD_IMG_PATH='./initrd.img' CMDLINE_PATH='./generic.prm' kernel_size=(stat -c%s KERNEL_IMG_PATH) initrd_size=(stat -c%s INITRD_IMG_PATH) # The following command rounds the `kernel` size up to the megabytes (MB) boundary. # This value is the starting address of `initrd.img`. offset=(( (kernel_size + 1048575) / 1048576 * 1048576 )) INITRD_IMG_NAME=(echo INITRD_IMG_PATH | rev | cut -d '/' -f 1 | rev) # The following commands create the kernel binary patch file that contains the `initrd` address and size: KERNEL_OFFSET=0x00000000 KERNEL_CMDLINE_OFFSET=0x00010480 INITRD_ADDR_SIZE_OFFSET=0x00010408 OFFSET_HEX=(printf '0x%08x\n' offset) # The following command converts the address and size to binary format: printf "(printf '%016x\n' USDinitrd_size)" | xxd -r -p > temp_size.bin # The following command concatenates the address and size binaries: cat temp_address.bin temp_size.bin > "USDINITRD_IMG_NAME.addrsize" # The following command deletes temporary files: rm -rf temp_address.bin temp_size.bin # The following commands create the `.ins` file. # The file is based on the paths of the `kernel.img`, `initrd.img`, `initrd.img.addrsize`, and `cmdline` files and the memory locations where the data is to be copied. KERNEL_IMG_PATH KERNEL_OFFSET INITRD_IMG_PATH OFFSET_HEX INITRD_IMG_NAME.addrsize INITRD_ADDR_SIZE_OFFSET CMDLINE_PATH KERNEL_CMDLINE_OFFSET Execute the script by running the following command: USD python3 <file_name>.py Transfer the initrd , kernel , generic.ins , and initrd.img.addrsize parameter files to the file server. For more information, see Booting Linux in LPAR mode (IBM documentation). Start the machine. Repeat the procedure for all other machines in the cluster. Additional resources Installing a cluster with z/VM on IBM Z and IBM LinuxONE
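The parameter files in this chapter fetch the rootfs and related artifacts over HTTP ( coreos.live.rootfs_url , bootArtifactsBaseURL ). As a minimal sketch only, assuming the generated boot-artifacts directory is served from a workstation on port 8080 during testing, a throwaway HTTP server can be started with:
cd boot-artifacts
python3 -m http.server 8080   # serves the generated kernel, initrd, and rootfs images to PXE/iPXE clients
In a production environment, copy the assets to your existing HTTP or FTP infrastructure instead.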
[ "sudo dnf install /usr/bin/nmstatectl -y", "mkdir ~/<directory_name>", "cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF", "apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 bootArtifactsBaseURL: <asset_server_URL>", "openshift-install agent create pxe-files", "boot-artifacts ├─ agent.x86_64-initrd.img ├─ agent.x86_64.ipxe ├─ agent.x86_64-rootfs.img └─ agent.x86_64-vmlinuz", "rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=<rootfs_url> \\ 1 ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \\ 2 zfcp.allow_lun_scan=0 \\ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\"", "ipl c", "virt-install --name <vm_name> --autostart --ram=16384 --cpu host --vcpus=8 --location <path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img \\ 1 --disk <qcow_image_path> --network network:macvtap ,mac=<mac_address> --graphics none --noautoconsole --wait=-1 --extra-args \"rd.neednet=1 nameserver=<nameserver>\" --extra-args \"ip=<IP>::<nameserver>::<hostname>:enc1:none\" --extra-args \"coreos.live.rootfs_url=http://<http_server>:8080/agent.s390x-rootfs.img\" --extra-args \"random.trust_cpu=on rd.luks.options=discard\" --extra-args \"ignition.firstboot ignition.platform.id=metal\" --extra-args \"console=tty1 console=ttyS1,115200n8\" --extra-args \"coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8\" --osinfo detect=on,require=off", "rd.neednet=1 cio_ignore=all,!condev console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.persistent-kargs=console=ttysclp0 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 2 rd.znet=qeth,<network_adaptor_range>,layer2=1 rd.<disk_type>=<adapter> \\ 3 zfcp.allow_lun_scan=0 ai.ip_cfg_override=1 
\\ 4 random.trust_cpu=on rd.luks.options=discard", "The following commands retrieve the size of the `kernel` and `initrd`: KERNEL_IMG_PATH='./kernel.img' INITRD_IMG_PATH='./initrd.img' CMDLINE_PATH='./generic.prm' kernel_size=(stat -c%s KERNEL_IMG_PATH) initrd_size=(stat -c%s INITRD_IMG_PATH) The following command rounds the `kernel` size up to the next megabytes (MB) boundary. This value is the starting address of `initrd.img`. offset=(( (kernel_size + 1048575) / 1048576 * 1048576 )) INITRD_IMG_NAME=(echo INITRD_IMG_PATH | rev | cut -d '/' -f 1 | rev) The following commands create the kernel binary patch file that contains the `initrd` address and size: KERNEL_OFFSET=0x00000000 KERNEL_CMDLINE_OFFSET=0x00010480 INITRD_ADDR_SIZE_OFFSET=0x00010408 OFFSET_HEX=(printf '0x%08x\\n' offset) The following command converts the address and size to binary format: printf \"(printf '%016x\\n' USDinitrd_size)\" | xxd -r -p > temp_size.bin The following command concatenates the address and size binaries: cat temp_address.bin temp_size.bin > \"USDINITRD_IMG_NAME.addrsize\" The following command deletes temporary files: rm -rf temp_address.bin temp_size.bin The following commands create the `.ins` file. The file is based on the paths of the `kernel.img`, `initrd.img`, `initrd.img.addrsize`, and `cmdline` files and the memory locations where the data is to be copied. KERNEL_IMG_PATH KERNEL_OFFSET INITRD_IMG_PATH OFFSET_HEX INITRD_IMG_NAME.addrsize INITRD_ADDR_SIZE_OFFSET CMDLINE_PATH KERNEL_CMDLINE_OFFSET", "python3 <file_name>.py" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_an_on-premise_cluster_with_the_agent-based_installer/prepare-pxe-assets-agent
Chapter 2. General Updates
Chapter 2. General Updates Users with any UID are now able to log in after the update to RHEL 7 Since Red Hat Enterprise Linux 7.3, the default value of the first_valid_uid configuration option of Dovecot changed from 500 in Red Hat Enterprise Linux 6 to 1000 in Red Hat Enterprise Linux 7. Consequently, if a Red Hat Enterprise Linux 6 installation did not have first_valid_uid explicitly defined, the Dovecot configuration did not allow users with UID less than 1000 to log in after the update to Red Hat Enterprise Linux 7. Note that only installations where first_valid_uid was not explicitly defined were affected. This problem has been addressed by the post-upgrade script, which now changes the first_valid_uid value from 1000 to the original value on the source system. As a result, users with any UID are able to log in after the update to Red Hat Enterprise Linux 7. (BZ# 1388967 )
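If you need to restore the Red Hat Enterprise Linux 6 behaviour by hand, the option can also be set explicitly; the following is a sketch only, assuming the stock configuration layout in which the mail settings live in /etc/dovecot/conf.d/10-mail.conf (path assumed):
# /etc/dovecot/conf.d/10-mail.conf (path assumed)
first_valid_uid = 500
Restart the dovecot service afterwards for the change to take effect.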
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/bug_fixes_general_updates
Chapter 10. Verifying connectivity to an endpoint
Chapter 10. Verifying connectivity to an endpoint The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating. 10.1. Connection health checks performed To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services: Kubernetes API server service Kubernetes API server endpoints OpenShift API server service OpenShift API server endpoints Load balancers To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets: Health check target service Health check target endpoints 10.2. Implementation of connection health checks The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks: Health check source This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object. Health check target A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node. 10.3. PodNetworkConnectivityCheck object fields The PodNetworkConnectivityCheck object fields are described in the following tables. Table 10.1. PodNetworkConnectivityCheck object fields Field Type Description metadata.name string The name of the object in the following format: <source>-to-<target> . The destination described by <target> includes one of the following strings: load-balancer-api-external load-balancer-api-internal kubernetes-apiserver-endpoint kubernetes-apiserver-service-cluster network-check-target openshift-apiserver-endpoint openshift-apiserver-service-cluster metadata.namespace string The namespace that the object is associated with. This value is always openshift-network-diagnostics . spec.sourcePod string The name of the pod where the connection check originates, such as network-check-source-596b4c6566-rgh92 . spec.targetEndpoint string The target of the connection check, such as api.devcluster.example.com:6443 . spec.tlsClientCert object Configuration for the TLS certificate to use. spec.tlsClientCert.name string The name of the TLS certificate used, if any. The default value is an empty string. status object An object representing the condition of the connection test and logs of recent connection successes and failures. status.conditions array The latest status of the connection check and any previous statuses. status.failures array Connection test logs from unsuccessful attempts. status.outages array Connection test logs covering the time periods of any outages. status.successes array Connection test logs from successful attempts. The following table describes the fields for objects in the status.conditions array: Table 10.2.
status.conditions Field Type Description lastTransitionTime string The time that the condition of the connection transitioned from one status to another. message string The details about the last transition in a human readable format. reason string The last status of the transition in a machine readable format. status string The status of the condition. type string The type of the condition. The following table describes the fields for objects in the status.outages array: Table 10.3. status.outages Field Type Description end string The timestamp from when the connection failure is resolved. endLogs array Connection log entries, including the log entry related to the successful end of the outage. message string A summary of outage details in a human readable format. start string The timestamp from when the connection failure is first detected. startLogs array Connection log entries, including the original failure. Connection log fields The fields for a connection log entry are described in the following table. The object is used in the following fields: status.failures[] status.successes[] status.outages[].startLogs[] status.outages[].endLogs[] Table 10.4. Connection log object Field Type Description latency string Records the duration of the action. message string Provides the status in a human readable format. reason string Provides the reason for status in a machine readable format. The value is one of TCPConnect , TCPConnectError , DNSResolve , DNSError . success boolean Indicates if the log entry is a success or failure. time string The start time of the connection check. 10.4. Verifying network connectivity for an endpoint As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role.
Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: USD oc get podnetworkconnectivitycheck -n openshift-network-diagnostics Example output NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m View the connection test logs: From the output of the command, identify the endpoint that you want to review the connectivity logs for. To view the object, enter the following command: USD oc get podnetworkconnectivitycheck <name> \ -n openshift-network-diagnostics -o yaml where <name> specifies the name of the PodNetworkConnectivityCheck object. Example output apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics ... 
spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: "" status: conditions: - lastTransitionTime: "2021-01-13T20:11:34Z" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: "True" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" outages: - end: "2021-01-13T20:11:34Z" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T20:11:34Z" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" message: Connectivity restored after 2m59.999789186s start: "2021-01-13T20:08:34Z" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:14:34Z" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:13:34Z" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:12:34Z" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:11:34Z" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp 
connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:10:34Z" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:09:34Z" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:08:34Z" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:07:34Z" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:06:34Z" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:05:34Z"
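Because a cluster typically contains many PodNetworkConnectivityCheck objects, it can be easier to summarize them than to read each YAML document. The following is an illustrative sketch (not taken from the product documentation) that uses jq to print the Reachable condition status next to each check name:
oc get podnetworkconnectivitycheck -n openshift-network-diagnostics -o json \
  | jq -r '.items[] | [ (.status.conditions // [] | map(select(.type=="Reachable")) | .[0].status // "Unknown"), .metadata.name ] | @tsv'
Any check that reports False or Unknown in the first column is a candidate for closer inspection with the oc get ... -o yaml command shown above.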
[ "oc get podnetworkconnectivitycheck -n openshift-network-diagnostics", "NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m", "oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml", "apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - 
latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true 
time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/verifying-connectivity-endpoint
Chapter 41. Configuring the environment mode in KIE Server and Business Central
Chapter 41. Configuring the environment mode in KIE Server and Business Central You can set KIE Server to run in production mode or in development mode. Development mode provides a flexible deployment policy that enables you to update existing deployment units (KIE containers) while maintaining active process instances for small changes. It also enables you to reset the deployment unit state before updating active process instances for larger changes. Production mode is optimal for production environments, where each deployment creates a new deployment unit. In a development environment, you can click Deploy in Business Central to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and replace all instances. The next time you deploy or redeploy the built KJAR, the deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option in Business Central is disabled and you can click only Deploy to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server. Procedure To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production . To configure the deployment behavior for a project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option. Note By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode.
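As a rough illustration of the first step, the org.kie.server.mode system property is passed to the Java process that runs KIE Server. The exact mechanism depends on how KIE Server is started; the sketch below assumes an EAP-style standalone.conf and a placeholder EAP_HOME path, and is only one possible way to set the property, not the single supported method.
# Sketch: append the property to the JVM options used to start KIE Server
# (assumes an EAP-style standalone.conf; EAP_HOME is a placeholder).
echo 'JAVA_OPTS="$JAVA_OPTS -Dorg.kie.server.mode=development"' >> $EAP_HOME/bin/standalone.conf
# Or pass it directly on the command line when starting the server:
# ./standalone.sh -Dorg.kie.server.mode=production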
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/configuring-environment-mode-proc_configuring-central
Chapter 2. Power monitoring overview
Chapter 2. Power monitoring overview Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. About power monitoring You can use power monitoring for Red Hat OpenShift to monitor the power usage and identify power-consuming containers running in an OpenShift Container Platform cluster. Power monitoring collects and exports energy-related system statistics from various components, such as CPU and DRAM. It provides granular power consumption data for Kubernetes pods, namespaces, and nodes. Warning Power monitoring Technology Preview works only in bare-metal deployments. Most public cloud vendors do not expose Kernel Power Management Subsystems to virtual machines. 2.2. Power monitoring architecture Power monitoring is made up of the following major components: The Power monitoring Operator For administrators, the Power monitoring Operator streamlines the monitoring of power usage for workloads by simplifying the deployment and management of Kepler in an OpenShift Container Platform cluster. The setup and configuration for the Power monitoring Operator are simplified by adding a Kepler custom resource definition (CRD). The Operator also manages operations, such as upgrading, removing, configuring, and redeploying Kepler. Kepler Kepler is a key component of power monitoring. It is responsible for monitoring the power usage of containers running in OpenShift Container Platform. It generates metrics related to the power usage of both nodes and containers. 2.3. Kepler hardware and virtualization support Kepler is the key component of power monitoring that collects real-time power consumption data from a node through one of the following methods: Kernel Power Management Subsystem (preferred) rapl-sysfs : This requires access to the /sys/class/powercap/intel-rapl host file. rapl-msr : This requires access to the /dev/cpu/*/msr host file. The estimator power source Without access to the kernel's power cap subsystem, Kepler uses a machine learning model to estimate the power usage of the CPU on the node. Warning The estimator feature is experimental, not supported, and should not be relied upon. You can identify the power estimation method for a node by using the Power Monitoring / Overview dashboard. 2.4. About FIPS compliance for Power monitoring Operator Starting with version 0.4, Power monitoring Operator for Red Hat OpenShift is FIPS compliant. When deployed on an OpenShift Container Platform cluster in FIPS mode, it uses Red Hat Enterprise Linux (RHEL) cryptographic libraries validated by National Institute of Standards and Technology (NIST). For details on the NIST validation program, see Cryptographic module validation program . For the latest NIST status of RHEL cryptographic libraries, see Compliance activities and government standards . To enable FIPS mode, you must install Power monitoring Operator for Red Hat OpenShift on an OpenShift Container Platform cluster. For more information, see "Do you need extra security for your cluster?". 2.5. 
Additional resources Power monitoring dashboards overview Do you need extra security for your cluster?
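To get a sense of which power source Kepler is likely to use on a given bare-metal node, you can check for the kernel power capping interface described in section 2.3. The following sketch is an assumed convenience check rather than a step from the official procedure, and the node name is a placeholder; the Power Monitoring / Overview dashboard remains the documented way to identify the power estimation method.
# Check whether the RAPL sysfs interface is exposed on a node.
# <node-name> is a placeholder for a node in your cluster.
oc debug node/<node-name> -- chroot /host ls /sys/class/powercap/intel-rapl
# If the directory exists, Kepler can use rapl-sysfs; otherwise it may fall
# back to rapl-msr (needs /dev/cpu/*/msr) or the experimental estimator.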
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/power_monitoring/power-monitoring-overview
Chapter 2. Reducing resource consumption of OpenShift Pipelines
Chapter 2. Reducing resource consumption of OpenShift Pipelines If you use clusters in multi-tenant environments you must control the consumption of CPU, memory, and storage resources for each project and Kubernetes object. This helps prevent any one application from consuming too many resources and affecting other applications. To define the final resource limits that are set on the resulting pods, Red Hat OpenShift Pipelines use resource quota limits and limit ranges of the project in which they are executed. To restrict resource consumption in your project, you can: Set and manage resource quotas to limit the aggregate resource consumption. Use limit ranges to restrict resource consumption for specific objects, such as pods, images, image streams, and persistent volume claims. 2.1. Understanding resource consumption in pipelines Each task consists of a number of required steps to be executed in a particular order defined in the steps field of the Task resource. Every task runs as a pod, and each step runs as a container within that pod. The Resources field in the steps spec specifies the limits for resource consumption. By default, the resource requests for the CPU, memory, and ephemeral storage are set to BestEffort (zero) values or to the minimums set through limit ranges in that project. Example configuration of resource requests and limits for a step spec: steps: - name: <step_name> computeResources: requests: memory: 2Gi cpu: 600m limits: memory: 4Gi cpu: 900m When the LimitRange parameter and the minimum values for container resource requests are specified in the project in which the pipeline and task runs are executed, Red Hat OpenShift Pipelines looks at all the LimitRange values in the project and uses the minimum values instead of zero. Example configuration of limit range parameters at a project level apiVersion: v1 kind: LimitRange metadata: name: <limit_container_resource> spec: limits: - max: cpu: "600m" memory: "2Gi" min: cpu: "200m" memory: "100Mi" default: cpu: "500m" memory: "800Mi" defaultRequest: cpu: "100m" memory: "100Mi" type: Container ... 2.2. Mitigating extra resource consumption in pipelines When you have resource limits set on the containers in your pod, OpenShift Container Platform sums up the resource limits requested as all containers run simultaneously. To consume the minimum amount of resources needed to execute one step at a time in the invoked task, Red Hat OpenShift Pipelines requests the maximum CPU, memory, and ephemeral storage as specified in the step that requires the most amount of resources. This ensures that the resource requirements of all the steps are met. Requests other than the maximum values are set to zero. However, this behavior can lead to higher resource usage than required. If you use resource quotas, this could also lead to unschedulable pods. For example, consider a task with two steps that uses scripts, and that does not define any resource limits and requests. The resulting pod has two init containers (one for entrypoint copy, the other for writing scripts) and two containers, one for each step. OpenShift Container Platform uses the limit range set up for the project to compute required resource requests and limits. 
For this example, set the following limit range in the project: apiVersion: v1 kind: LimitRange metadata: name: mem-min-max-demo-lr spec: limits: - max: memory: 1Gi min: memory: 500Mi type: Container In this scenario, each init container uses a request memory of 1Gi (the max limit of the limit range), and each container uses a request memory of 500Mi. Thus, the total memory request for the pod is 2Gi. If the same limit range is used with a task of ten steps, the final memory request is 5Gi, which is higher than what each step actually needs, that is 500Mi (since each step runs after the other). Thus, to reduce resource consumption, you can: Reduce the number of steps in a given task by grouping different steps into one bigger step, using the script feature, and the same image. This reduces the minimum requested resource. Distribute steps that are relatively independent of each other and can run on their own to multiple tasks instead of a single task. This lowers the number of steps in each task, making the request for each task smaller, and the scheduler can then run them when the resources are available. 2.3. Additional resources Setting compute resource quota for OpenShift Pipelines Resource quotas per project Restricting resource consumption using limit ranges Resource requests and limits in Kubernetes
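As a concrete illustration of the first recommendation above, the sketch below collapses two shell-based steps into a single step by using the script feature with one image. The task name, image, and commands are placeholders chosen for this example; they are not values from the documentation.
# Sketch: one step running a combined script instead of two separate steps.
oc apply -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: combined-steps-example            # placeholder name
spec:
  steps:
  - name: build-and-check                 # one step instead of two
    image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
    script: |
      #!/usr/bin/env bash
      echo "step 1: build"                # formerly its own step
      echo "step 2: check"                # formerly its own step
EOF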
[ "spec: steps: - name: <step_name> computeResources: requests: memory: 2Gi cpu: 600m limits: memory: 4Gi cpu: 900m", "apiVersion: v1 kind: LimitRange metadata: name: <limit_container_resource> spec: limits: - max: cpu: \"600m\" memory: \"2Gi\" min: cpu: \"200m\" memory: \"100Mi\" default: cpu: \"500m\" memory: \"800Mi\" defaultRequest: cpu: \"100m\" memory: \"100Mi\" type: Container", "apiVersion: v1 kind: LimitRange metadata: name: mem-min-max-demo-lr spec: limits: - max: memory: 1Gi min: memory: 500Mi type: Container" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/managing_performance_and_resource_use/reducing-pipelines-resource-consumption
Chapter 2. Preparing your environment for Satellite installation in an IPv6 network
Chapter 2. Preparing your environment for Satellite installation in an IPv6 network You can install and use Satellite in an IPv6 network. Before installing Satellite in an IPv6 network, view the limitations and ensure that you meet the requirements. To provision hosts in an IPv6 network, after installing Satellite, you must also configure Satellite for the UEFI HTTP boot provisioning. For more information, see Section 4.5, "Configuring Satellite for UEFI HTTP boot provisioning in an IPv6 network" . 2.1. Limitations of Satellite installation in an IPv6 network Satellite installation in an IPv6 network has the following limitations: You can install Satellite and Capsules in IPv6-only systems; dual-stack installation is not supported. Although Satellite provisioning templates include IPv6 support for PXE and HTTP (iPXE) provisioning, the only tested and certified provisioning workflow is the UEFI HTTP Boot provisioning. This limitation only relates to users who plan to use Satellite to provision hosts. 2.2. Requirements for Satellite installation in an IPv6 network Before installing Satellite in an IPv6 network, ensure that you meet the following requirements: You must deploy an external DHCP IPv6 server as a separate unmanaged service to bootstrap clients into GRUB2, which then configures IPv6 networking either using DHCPv6 or assigning a static IPv6 address. This is required because the DHCP server in Red Hat Enterprise Linux (ISC DHCP) does not provide an integration API for managing IPv6 records; therefore, the Capsule DHCP plugin that provides DHCP management is limited to IPv4 subnets. You must deploy an external HTTP proxy server that supports both IPv4 and IPv6. This is required because Red Hat Content Delivery Network distributes content only over IPv4 networks; therefore, you must use this proxy to pull content into the Satellite on your IPv6 network. You must configure Satellite to use this dual stack (supporting both IPv4 and IPv6) HTTP proxy server as the default proxy. For more information, see Adding a Default HTTP Proxy to Satellite .
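Before configuring Satellite to use the dual stack proxy, it can be useful to confirm that the proxy answers over both address families. The following is only a quick sanity check under an assumed, placeholder proxy URL; it is not part of the documented procedure.
# Verify the HTTP proxy is reachable over IPv4 and IPv6
# (http://proxy.example.com:3128 is a placeholder URL).
curl -4 -x http://proxy.example.com:3128 -I https://cdn.redhat.com
curl -6 -x http://proxy.example.com:3128 -I https://cdn.redhat.com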
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/preparing-environment-for-installation-in-ipv6-network_satellite
Chapter 21. Configuring Network Plugins
Chapter 21. Configuring Network Plugins The director includes environment files to help configure third-party network plugins: 21.1. Fujitsu Converged Fabric (C-Fabric) You can enable the Fujitsu Converged Fabric (C-Fabric) plugin using the environment file located at /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-cfab.yaml . Copy the environment file to your templates subdirectory: Edit the resource_registry to use an absolute path: Review the parameter_defaults in /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml : NeutronFujitsuCfabAddress - The telnet IP address of the C-Fabric. (string) NeutronFujitsuCfabUserName - The C-Fabric username to use. (string) NeutronFujitsuCfabPassword - The password of the C-Fabric user account. (string) NeutronFujitsuCfabPhysicalNetworks - List of <physical_network>:<vfab_id> tuples that specify physical_network names and their corresponding vfab IDs. (comma_delimited_list) NeutronFujitsuCfabSharePprofile - Determines whether to share a C-Fabric pprofile among neutron ports that use the same VLAN ID. (boolean) NeutronFujitsuCfabPprofilePrefix - The prefix string for pprofile name. (string) NeutronFujitsuCfabSaveConfig - Determines whether to save the configuration. (boolean) To apply the template to your deployment, include the environment file in the openstack overcloud deploy command. For example: 21.2. Fujitsu FOS Switch You can enable the Fujitsu FOS Switch plugin using the environment file located at /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-fossw.yaml . Copy the environment file to your templates subdirectory: Edit the resource_registry to use an absolute path: Review the parameter_defaults in /home/stack/templates/neutron-ml2-fujitsu-fossw.yaml : NeutronFujitsuFosswIps - The IP addresses of all FOS switches. (comma_delimited_list) NeutronFujitsuFosswUserName - The FOS username to use. (string) NeutronFujitsuFosswPassword - The password of the FOS user account. (string) NeutronFujitsuFosswPort - The port number to use for the SSH connection. (number) NeutronFujitsuFosswTimeout - The timeout period of the SSH connection. (number) NeutronFujitsuFosswUdpDestPort - The port number of the VXLAN UDP destination on the FOS switches. (number) NeutronFujitsuFosswOvsdbVlanidRangeMin - The minimum VLAN ID in the range that is used for binding VNI and physical port. (number) NeutronFujitsuFosswOvsdbPort - The port number for the OVSDB server on the FOS switches. (number) To apply the template to your deployment, include the environment file in the openstack overcloud deploy command. For example:
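For reference, a filled-in parameter_defaults section for the C-Fabric plugin might look like the sketch below. Only the parameter names come from the list above; the file name and all values are placeholders, and you can either merge the section into /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml or pass the extra environment file with an additional -e option.
# Sketch: example parameter values for the C-Fabric plugin (placeholders only).
cat > /home/stack/templates/cfab-parameters.yaml <<'EOF'
parameter_defaults:
  NeutronFujitsuCfabAddress: 192.0.2.50
  NeutronFujitsuCfabUserName: admin
  NeutronFujitsuCfabPassword: examplepassword
  NeutronFujitsuCfabPhysicalNetworks: ['physnet1:1', 'physnet2:2']
  NeutronFujitsuCfabSharePprofile: true
  NeutronFujitsuCfabPprofilePrefix: neutron-
  NeutronFujitsuCfabSaveConfig: true
EOF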
[ "cp /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-cfab.yaml /home/stack/templates/", "resource_registry: OS::TripleO::Services::NeutronML2FujitsuCfab: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-plugin-ml2-fujitsu-cfab.yaml", "openstack overcloud deploy --templates -e /home/stack/templates/neutron-ml2-fujitsu-cfab.yaml [OTHER OPTIONS]", "cp /usr/share/openstack-tripleo-heat-templates/environments/neutron-ml2-fujitsu-fossw.yaml /home/stack/templates/", "resource_registry: OS::TripleO::Services::NeutronML2FujitsuFossw: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-plugin-ml2-fujitsu-fossw.yaml", "openstack overcloud deploy --templates -e /home/stack/templates/neutron-ml2-fujitsu-fossw.yaml [OTHER OPTIONS]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/sect-network_plugins
Chapter 5. RHEL 8.0.0 release
Chapter 5. RHEL 8.0.0 release 5.1. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8. 5.1.1. The web console Note The web console's Subscriptions page is now provided by the new subscription-manager-cockpit package. A firewall interface has been added to the web console The Networking page in the RHEL 8 web console now includes a Firewall section. In this section, users can enable or disable the firewall, as well as add, remove, and modify firewall rules. (BZ#1647110) The web console is now available by default Packages for the RHEL 8 web console, also known as Cockpit, are now part of Red Hat Enterprise Linux default repositories, and can therefore be immediately installed on a registered RHEL 8 system. In addition, on a non-minimal installation of RHEL 8, the web console is automatically installed and firewall ports required by the console are automatically open. A system message has also been added prior to login that provides information about how to enable or access the web console. (JIRA:RHELPLAN-10355) Better IdM integration for the web console If your system is enrolled in an Identity Management (IdM) domain, the RHEL 8 web console now uses the domain's centrally managed IdM resources by default. This includes the following benefits: The IdM domain's administrators can use the web console to manage the local machine. The console's web server automatically switches to a certificate issued by the IdM certificate authority (CA) and accepted by browsers. Users with a Kerberos ticket in the IdM domain do not need to provide login credentials to access the web console. SSH hosts known to the IdM domain are accessible to the web console without manually adding an SSH connection. Note that for IdM integration with the web console to work properly, the user first needs to run the ipa-advise utility with the enable-admins-sudo option in the IdM master system. (JIRA:RHELPLAN-3010) The web console is now compatible with mobile browsers With this update, the web console menus and pages can be navigated on mobile browser variants. This makes it possible to manage systems using the RHEL 8 web console from a mobile device. (JIRA:RHELPLAN-10352) The web console front page now displays missing updates and subscriptions If a system managed by the RHEL 8 web console has outdated packages or a lapsed subscription, a warning is now displayed on the web console front page of the system. (JIRA:RHELPLAN-10353) The web console now supports PBD enrollment With this update, you can use the the RHEL 8 web console interface to apply Policy-Based Decryption (PBD) rules to disks on managed systems. This uses the Clevis decryption client to facilitate a variety of security management functions in the web console, such as automatic unlocking of LUKS-encrypted disk partitions. (JIRA:RHELPLAN-10354) Virtual Machines can now be managed using the web console The Virtual Machines page can now be added to the RHEL 8 web console interface, which enables the user to create and manage libvirt-based virtual machines. (JIRA:RHELPLAN-2896) 5.1.2. Installer and image creation Installing RHEL from a DVD using SE and HMC is now fully supported on IBM Z The installation of Red Hat Enterprise Linux 8 on IBM Z hardware from a DVD using the Support Element (SE) and Hardware Management Console (HMC) is now fully supported. This addition simplifies the installation process on IBM Z with SE and HMC . 
When booting from a binary DVD, the installer prompts the user to enter additional kernel parameters. To set the DVD as an installation source, append inst.repo=hmc to the kernel parameters. The installer then enables SE and HMC file access, fetches the images for stage2 from the DVD, and provides access to the packages on the DVD for software selection. The new feature eliminates the requirement of an external network setup and expands the installation options. (BZ#1500792) Installer now supports the LUKS2 disk encryption format Red Hat Enterprise Linux 8 installer now uses the LUKS2 format by default but you can select a LUKS version from Anaconda's Custom Partitioning window or by using the new options in Kickstart's autopart , logvol , part , and RAID commands. LUKS2 provides many improvements and features, for example, it extends the capabilities of the on-disk format and provides flexible ways of storing metadata. (BZ#1547908) Anaconda supports System Purpose in RHEL 8 Previously, Anaconda did not provide system purpose information to Subscription Manager . In Red Hat Enterprise Linux 8.0, you can set the intended purpose of the system during installation by using Anaconda's System Purpose window or Kickstart's syspurpose command. When the installation completes, Subscription Manager uses the system purpose information when subscribing the system. (BZ#1612060) Pykickstart supports System Purpose in RHEL 8 Previously, it was not possible for the pykickstart library to provide system purpose information to Subscription Manager . In Red Hat Enterprise Linux 8.0, pykickstart parses the new syspurpose command and records the intended purpose of the system during automated and partially-automated installation. The information is then passed to Anaconda , saved on the newly-installed system, and available for Subscription Manager when subscribing the system. (BZ#1612061) Anaconda supports a new kernel boot parameter in RHEL 8 Previously, you could only specify a base repository from the kernel boot parameters. In Red Hat Enterprise Linux 8, a new kernel parameter, inst.addrepo=<name>,<url> , allows you to specify an additional repository during installation. This parameter has two mandatory values: the name of the repository and the URL that points to the repository. For more information, see https://anaconda-installer.readthedocs.io/en/latest/boot-options.html#inst-addrepo (BZ#1595415) Anaconda supports a unified ISO in RHEL 8 In Red Hat Enterprise Linux 8.0, a unified ISO automatically loads the BaseOS and AppStream installation source repositories. This feature works for the first base repository that is loaded during installation. For example, if you boot the installation with no repository configured and have the unified ISO as the base repository in the GUI, or if you boot the installation using the inst.repo= option that points to the unified ISO. As a result, the AppStream repository is enabled under the Additional Repositories section of the Installation Source GUI window. You cannot remove the AppStream repository or change its settings but you can disable it in Installation Source . This feature does not work if you boot the installation using a different base repository and then change it to the unified ISO. If you do that, the base repository is replaced. However, the AppStream repository is not replaced and points to the original file. 
(BZ#1610806) Anaconda can install modular packages in Kickstart scripts The Anaconda installer has been extended to handle all features related to application streams: modules, streams and profiles. Kickstart scripts can now enable module and stream combinations, install module profiles, and install modular packages. For more information, see Performing an advanced RHEL installation . (JIRA:RHELPLAN-1943) The nosmt boot option is now available in the RHEL 8 installation options The nosmt boot option is available in the installation options that are passed to a newly-installed RHEL 8 system. (BZ#1677411) RHEL 8 supports installing from a repository on a local hard drive Previously, installing RHEL from a hard drive required an ISO image as the installation source. However, the RHEL 8 ISO image might be too large for some file systems; for example, the FAT32 file system cannot store files larger than 4 GiB. In RHEL 8, you can enable installation from a repository on a local hard drive. You only need to specify the directory instead of the ISO image. For example:`inst.repo=hd:<device>:<path to the repository>` (BZ#1502323) Custom system image creation with Image Builder is available in RHEL 8 The Image Builder tool enables users to create customized RHEL images. Image Builder is available in AppStream in the lorax-composer package. With Image Builder, users can create custom system images which include additional packages. Image Builder functionality can be accessed through: a graphical user interface in the web console a command line interface in the composer-cli tool. Image Builder output formats include, among others: live ISO disk image qcow2 file for direct use with a virtual machine or OpenStack file system image file cloud images for Azure, VMWare and AWS To learn more about Image Builder, see the documentation title Composing a customized RHEL system image . (JIRA:RHELPLAN-7291, BZ#1628645, BZ#1628646, BZ#1628647, BZ#1628648) Added new kickstart commands: authselect and modules With this release, the following kickstart commands are added: authselect : Use the authselect command to set up the system authentication options during installation. You can use authselect as a replacement for deprecated auth or authconfig Kickstart commands. For more information, see the authselect section in the Performing an advanced installation guide. module : Use the module command to enable a package module stream within the kickstart script. For more information, see the module section in the Performing an advanced installation guide. (BZ#1972210) 5.1.3. Kernel Kernel version in RHEL 8.0 Red Hat Enterprise Linux 8.0 is distributed with the kernel version 4.18.0-80. (BZ#1797671) ARM 52-bit physical addressing is now available With this update, support for 52-bit physical addressing (PA) for the 64-bit ARM architecture is available. This provides larger address space than 48-bit PA. (BZ#1643522) The IOMMU code supports 5-level page tables in RHEL 8 The I/O memory management unit (IOMMU) code in the Linux kernel has been updated to support 5-level page tables in Red Hat Enterprise Linux 8. (BZ#1485546) Support for 5-level paging New P4d_t software page table type has been added into the Linux kernel in order to support 5-level paging in Red Hat Enterprise Linux 8. 
(BZ#1485532) Memory management supports 5-level page tables With Red Hat Enterprise Linux 7, existing memory bus had 48/46 bit of virtual/physical memory addressing capacity, and the Linux kernel implemented 4 levels of page tables to manage these virtual addresses to physical addresses. The physical bus addressing line put the physical memory upper limit capacity at 64 TB. These limits have been extended to 57/52 bit of virtual/physical memory addressing with 128 PiB of virtual address space and 4 PB of physical memory capacity. With the extended address range, the memory management in Red Hat Enterprise Linux 8 adds support for 5-level page table implementation, to be able to handle the expanded address range. (BZ#1485525) kernel-signing-ca.cer is moved to kernel-core in RHEL 8 In all versions of Red Hat Enterprise Linux 7, the kernel-signing-ca.cer public key was located in the kernel-doc package. However, in Red Hat Enterprise Linux 8, kernel-signing-ca.cer has been relocated to the kernel-core package for every architecture. (BZ#1638465) Spectre V2 mitigation default changed from IBRS to Retpolines The default mitigation for the Spectre V2 vulnerability (CVE-2017-5715) for systems with the 6th Generation Intel Core Processors and its close derivatives [1] has changed from Indirect Branch Restricted Speculation (IBRS) to Retpolines in Red Hat Enterprise Linux 8. Red Hat has implemented this change as a result of Intel's recommendations to align with the defaults used in the Linux community and to restore lost performance. However, note that using Retpolines in some cases may not fully mitigate Spectre V2. Intel's Retpoline document [2] describes any cases of exposure. This document also states that the risk of an attack is low. For use cases where complete Spectre V2 mitigation is desired, a user can select IBRS through the kernel boot line by adding the spectre_v2=ibrs flag. If one or more kernel modules were not built with the Retpoline support, the /sys/devices/system/cpu/vulnerabilities/spectre_v2 file will indicate vulnerability and the /var/log/messages file will identify the offending modules. See How to determine which modules are responsible for spectre_v2 returning "Vulnerable: Retpoline with unsafe module(s)"? for further information. [1] "6th generation Intel Core Processors and its close derivatives" are what the Intel's Retpolines document refers to as "Skylake-generation". [2] Retpoline: A Branch Target Injection Mitigation - White Paper (BZ#1651806) Intel(R) Omni-Path Architecture (OPA) Host Software Intel Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on installing Intel Omni-Path Architecture documentation, see: https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Software_RHEL_8_RN_K51383.pdf ( BZ#1683712 ) NUMA supports more nodes in RHEL 8 With this update, the Non-Uniform Memory Access (NUMA) node count has been increased from 4 NUMA nodes to 8 NUMA nodes in Red Hat Enterprise Linux 8 on systems with the 64-bit ARM architecture. (BZ#1550498) IOMMU passthrough is now enabled by default in RHEL 8 The Input/Output Memory Management Unit (IOMMU) passthrough has been enabled by default. 
This provides improved performance for AMD systems because Direct Memory Access (DMA) remapping is disabled for the host. This update brings consistency with Intel systems where DMA remapping is also disabled by default. Users may disable such behavior (and enable DMA remapping) by specifying either iommu.passthrough=off or iommu=nopt parameters on the kernel command line, including the hypervisor. (BZ#1658391) RHEL8 kernel now supports 5-level page tables Red Hat Enterprise Linux kernel now fully supports future Intel processors with up to 5 levels of page tables. This enables the processors to support up to 4PB of physical memory and 128PB of virtual address space. Applications that utilize large amounts of memory can now use as much memory as possible as provided by the system without the constraints of 4-level page tables. (BZ#1623590) RHEL8 kernel supports enhanced IBRS for future Intel CPUs Red Hat Enterprise Linux kernel now supports the use of enhanced Indirect Branch Restricted Speculation (IBRS) capability to mitigate the Spectre V2 vulnerability. When enabled, IBRS will perform better than Retpolines (default) to mitigate Spectre V2 and will not interfere with Intel Control-flow Enforcement technology. As a result, the performance penalty of enabling the mitigation for Spectre V2 will be smaller on future Intel CPUs. (BZ#1614144) bpftool for inspection and manipulation of eBPF-based programs and maps added The bpftool utility that serves for inspection and simple manipulation of programs and maps based on extended Berkeley Packet Filtering (eBPF) has been added into the Linux kernel. bpftool is a part of the kernel source tree, and is provided by the bpftool package, which is included as a sub-package of the kernel package. (BZ#1559607) The kernel-rt sources have been updated The kernel-rt sources have been updated to use the latest RHEL kernel source tree. The latest kernel source tree is now using the upstream v4.18 realtime patch set, which provides a number of bug fixes and enhancements over the version. (BZ#1592977) 5.1.4. Software management YUM performance improvement and support for modular content On Red Hat Enterprise Linux 8, installing software is ensured by the new version of the YUM tool, which is based on the DNF technology ( YUM v4 ). YUM v4 has the following advantages over the YUM v3 used on RHEL 7: Increased performance Support for modular content Well-designed stable API for integration with tooling For detailed information about differences between the new YUM v4 tool and the version YUM v3 from RHEL 7, see Changes in DNF CLI compared to YUM . YUM v4 is compatible with YUM v3 when using from the command line, editing or creating configuration files. For installing software, you can use the yum command and its particular options in the same way as on RHEL 7. Selected yum plug-ins and utilities have been ported to the new DNF back end, and can be installed under the same names as in RHEL 7. They also provide compatibility symlinks, so the binaries, configuration files and directories can be found in usual locations. Note that the legacy Python API provided by YUM v3 is no longer available. Users are advised to migrate their plug-ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable and fully supported. The DNF Python API is available at DNF API Reference . The Libdnf and Hawkey APIs (both C and Python) are unstable, and will likely change during Red Hat Enterprise Linux 8 life cycle. 
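As a brief illustration of the modular content support mentioned above, the following commands show how module streams are typically inspected and installed with YUM v4; the nodejs:10 stream is used only as an example of an Application Stream available in RHEL 8.
# Sketch: working with module streams using YUM v4.
yum module list nodejs        # show available streams and profiles
yum module info nodejs:10     # inspect the stream's profiles and packages
yum module install nodejs:10  # install the default profile of the 10 stream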
For more details on changes of YUM packages and tools availability, see Considerations in adopting RHEL 8 . Some of the YUM v3 features may behave differently in YUM v4 . If any such change negatively impacts your workflows, please open a case with Red Hat Support, as described in How do I open and manage a support case on the Customer Portal? (BZ#1581198) Notable RPM features in RHEL 8 Red Hat Enterprise Linux 8 is distributed with RPM 4.14. This version introduces many enhancements over RPM 4.11, which is available in RHEL 7. The most notable features include: The debuginfo packages can be installed in parallel Support for weak dependencies Support for rich or boolean dependencies Support for packaging files above 4 GB in size Support for file triggers Also, the most notable changes include: Stricter spec-parser Simplified signature checking the output in non-verbose mode Additions and deprecation in macros (BZ#1581990) RPM now validates the entire package contents before starting an installation On Red Hat Enterprise Linux 7, the RPM utility verified payload contents of individual files while unpacking. However, this is insufficient for multiple reasons: If the payload is damaged, it is only noticed after executing script actions, which are irreversible. If the payload is damaged, upgrade of a package aborts after replacing some files of the version, which breaks a working installation. The hashes on individual files are performed on uncompressed data, which makes RPM vulnerable to decompressor vulnerabilities. On Red Hat Enterprise Linux 8, the entire package is validated prior to the installation in a separate step, using the best available hash. Packages built on Red Hat Enterprise Linux 8 use a new SHA-256 hash on the compressed payload. On signed packages, the payload hash is additionally protected by the signature, and thus cannot be altered without breaking a signature and other hashes on the package header. Older packages use the MD5 hash of the header and payload unless it is disabled by configuration. The %_pkgverify_level macro can be used to additionally enable enforcing signature verification before installation or disable the payload verification completely. In addition, the %_pkgverify_flags macro can be used to limit which hashes and signatures are allowed. For example, it is possible to disable the use of the weak MD5 hash at the cost of compatibility with older packages. (JIRA:RHELPLAN-10596) 5.1.5. Infrastructure services Notable changes in the recommended Tuned profile in RHEL 8 With this update, the recommended Tuned profile (reported by the tuned-adm recommend command) is now selected based on the following rules - the first rule that matches takes effect: If the syspurpose role (reported by the syspurpose show command) contains atomic , and at the same time: if Tuned is running on bare metal, the atomic-host profile is selected if Tuned is running in a virtual machine, the atomic-guest profile is selected If Tuned is running in a virtual machine, the virtual-guest profile is selected If the syspurpose role contains desktop or workstation and the chassis type (reported by dmidecode ) is Notebook , Laptop , or Portable , then the balanced profile is selected If none of the above rules matches, the throughput-performance profile is selected (BZ#1565598) Files produced by named can be written in the working directory Previously, the named daemon stored some data in the working directory, which has been read-only in Red Hat Enterprise Linux. 
With this update, paths have been changed for selected files into subdirectories, where writing is allowed. Now, default directory Unix and SELinux permissions allow writing into the directory. Files distributed inside the directory are still read-only to named . (BZ#1588592) Geolite Databases have been replaced by Geolite2 Databases Geolite Databases that were present in Red Hat Enterprise Linux 7 were replaced by Geolite2 Databases on Red Hat Enterprise Linux 8. Geolite Databases were provided by the GeoIP package. This package together with the legacy database is no longer supported in the upstream. Geolite2 Databases are provided by multiple packages. The libmaxminddb package includes the library and the mmdblookup command line tool, which enables manual searching of addresses. The geoipupdate binary from the legacy GeoIP package is now provided by the geoipupdate package, and is capable of downloading both legacy databases and the new Geolite2 databases. (JIRA:RHELPLAN-6746) CUPS logs are handled by journald In RHEL 8, the CUPS logs are no longer stored in specific files within the /var/log/cups directory, which was used in RHEL 7. In RHEL 8, all types of CUPS logs are centrally-logged in the systemd journald daemon together with logs from other programs. To access the CUPS logs, use the journalctl -u cups command. For more information, see Accessing the CUPS logs in the systemd journal . (JIRA:RHELPLAN-12764) Notable BIND features in RHEL 8 RHEL 8 includes BIND (Berkeley Internet Name Domain) in version 9.11. This version of the DNS server introduces multiple new features and feature changes compared to version 9.10. New features: A new method of provisioning secondary servers called Catalog Zones has been added. Domain Name System Cookies are now sent by the named service and the dig utility. The Response Rate Limiting feature can now help with mitigation of DNS amplification attacks. Performance of response-policy zone (RPZ) has been improved. A new zone file format called map has been added. Zone data stored in this format can be mapped directly into memory, which enables zones to load significantly faster. A new tool called delv (domain entity lookup and validation) has been added, with dig-like semantics for looking up DNS data and performing internal DNS Security Extensions (DNSSEC) validation. A new mdig command is now available. This command is a version of the`dig` command that sends multiple pipelined queries and then waits for responses, instead of sending one query and waiting for the response before sending the query. A new prefetch option, which improves the recursive resolver performance, has been added. A new in-view zone option, which allows zone data to be shared between views, has been added. When this option is used, multiple views can serve the same zones authoritatively without storing multiple copies in memory. A new max-zone-ttl option, which enforces maximum TTLs for zones, has been added. When a zone containing a higher TTL is loaded, the load fails. Dynamic DNS (DDNS) updates with higher TTLs are accepted but the TTL is truncated. New quotas have been added to limit queries that are sent by recursive resolvers to authoritative servers experiencing denial-of-service attacks. The nslookup utility now looks up both IPv6 and IPv4 addresses by default. The named service now checks whether other name server processes are running before starting up. 
When loading a signed zone, named now checks whether a Resource Record Signature's (RSIG) inception time is in the future, and if so, it regenerates the RRSIG immediately. Zone transfers now use smaller message sizes to improve message compression, which reduces network usage. Feature changes: The version 3 XML schema for the statistics channel, including new statistics and a flattened XML tree for faster parsing, is provided by the HTTP interface. The legacy version 2 XML schema is no longer supported. The named service now listens on both IPv6 and IPv4 interfaces by default. The named service no longer supports GeoIP. Access control lists (ACLs) defined by presumed location of query sender are unavailable. (JIRA:RHELPLAN-1820) 5.1.6. Shells and command-line tools The nobody user replaces nfsnobody In Red Hat Enterprise Linux 7, there was: the nobody user and group pair with the ID of 99, and the nfsnobody user and group pair with the ID of 65534, which is the default kernel overflow ID, too. Both of these have been merged into the nobody user and group pair, which uses the 65534 ID in Red Hat Enterprise Linux 8. New installations no longer create the nfsnobody pair. This change reduces the confusion about files that are owned by nobody but have nothing to do with NFS. (BZ#1591969) Version control systems in RHEL 8 RHEL 8 provides the following version control systems: Git 2.18 , a distributed revision control system with a decentralized architecture. Mercurial 4.8 , a lightweight distributed version control system, designed for efficient handling of large projects. Subversion 1.10 , a centralized version control system. Note that the Concurrent Versions System (CVS) and Revision Control System (RCS), available in RHEL 7, are not distributed with RHEL 8. (BZ#1693775) Notable changes in Subversion 1.10 Subversion 1.10 introduces a number of new features since the version 1.7 distributed in RHEL 7, as well as the following compatibility changes: Due to incompatibilities in the Subversion libraries used for supporting language bindings, Python 3 bindings for Subversion 1.10 are unavailable. As a consequence, applications that require Python bindings for Subversion are unsupported. Repositories based on Berkeley DB are no longer supported. Before migrating, back up repositories created with Subversion 1.7 by using the svnadmin dump command. After installing RHEL 8, restore the repositories using the svnadmin load command. Existing working copies checked out by the Subversion 1.7 client in RHEL 7 must be upgraded to the new format before they can be used from Subversion 1.10 . After installing RHEL 8, run the svn upgrade command in each working copy. Smartcard authentication for accessing repositories using https:// is no longer supported. (BZ#1571415) Notable changes in dstat RHEL 8 is distributed with a new version of the dstat tool. This tool is now a part of the Performance Co-Pilot (PCP) toolkit. The /usr/bin/dstat file and the dstat package name is now provided by the pcp-system-tools package. The new version of dstat introduces the following enhancements over dstat available in RHEL 7: python3 support Historical analysis Remote host analysis Configuration file plugins New performance metrics ( BZ#1684947 ) 5.1.7. Dynamic programming languages, web and database servers Python 3 is the default Python implementation in RHEL 8 Red Hat Enterprise Linux 8 is distributed with Python 3.6 . The package might not be installed by default. 
To install Python 3.6 , use the yum install python3 command. Python 2.7 is available in the python2 package. However, Python 2 will have a shorter life cycle and its aim is to facilitate a smoother transition to Python 3 for customers. Neither the default python package nor the unversioned /usr/bin/python executable is distributed with RHEL 8. Customers are advised to use python3 or python2 directly. Alternatively, administrators can configure the unversioned python command using the alternatives command. For more information, see Introduction to Python . (BZ#1580387) Python scripts must specify major version in interpreter directives at RPM build time In RHEL 8, executable Python scripts are expected to use interpreter directives (hashbangs) specifying explicitly at least the major Python version. The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when building any RPM package. This script attempts to correct interpreter directives in all executable files. When the script encounters ambiguous Python interpreter directives that do not specify the major version of Python, it generates errors and the RPM build fails. Examples of such ambiguous interpreter directives include: #! /usr/bin/python #! /usr/bin/env python To modify interpreter directives in the Python scripts causing these build errors at RPM build time, use the pathfix.py script from the platform-python-devel package: Multiple PATH s can be specified. If a PATH is a directory, pathfix.py recursively scans for any Python scripts matching the pattern ^[a-zA-Z0-9_]+\.py$ , not only those with an ambiguous hashbang. Add the command for running pathfix.py to the %prep section or at the end of the %install section. For more information, see Handling interpreter directives in Python scripts . (BZ#1583620) Notable changes in PHP Red Hat Enterprise Linux 8 is distributed with PHP 7.2 . This version introduces the following major changes over PHP 5.4 , which is available in RHEL 7: PHP uses FastCGI Process Manager (FPM) by default (safe for use with a threaded httpd ) The php_value and php-flag variables should no longer be used in the httpd configuration files; they should be set in pool configuration instead: /etc/php-fpm.d/*.conf PHP script errors and warnings are logged to the /var/log/php-fpm/www-error.log file instead of /var/log/httpd/error.log When changing the PHP max_execution_time configuration variable, the httpd ProxyTimeout setting should be increased to match The user running PHP scripts is now configured in the FPM pool configuration (the /etc/php-fpm.d/www.conf file; the apache user is the default) The php-fpm service needs to be restarted after a configuration change or after a new extension is installed The zip extension has been moved from the php-common package to a separate package, php-pecl-zip The following extensions have been removed: aspell mysql (note that the mysqli and pdo_mysql extensions are still available, provided by php-mysqlnd package) memcache (BZ#1580430, BZ#1691688 ) Notable changes in Ruby RHEL 8 provides Ruby 2.5 , which introduces numerous new features and enhancements over Ruby 2.0.0 available in RHEL 7. Notable changes include: Incremental garbage collector has been added. The Refinements syntax has been added. Symbols are now garbage collected. The $SAFE=2 and $SAFE=3 safe levels are now obsolete. The Fixnum and Bignum classes have been unified into the Integer class.
Performance has been improved by optimizing the Hash class, improved access to instance variables, and the Mutex class being smaller and faster. Certain old APIs have been deprecated. Bundled libraries, such as RubyGems , Rake , RDoc , Psych , Minitest , and test-unit , have been updated. Other libraries, such as mathn , DL , ext/tk , and XMLRPC , which were previously distributed with Ruby , are deprecated or no longer included. The SemVer versioning scheme is now used for Ruby versioning. (BZ#1648843) Notable changes in Perl Perl 5.26 , distributed with RHEL 8, introduces the following changes over the version available in RHEL 7: Unicode 9.0 is now supported. New op-entry , loading-file , and loaded-file SystemTap probes are provided. Copy-on-write mechanism is used when assigning scalars for improved performance. The IO::Socket::IP module for handling IPv4 and IPv6 sockets transparently has been added. The Config::Perl::V module to access perl -V data in a structured way has been added. A new perl-App-cpanminus package has been added, which contains the cpanm utility for getting, extracting, building, and installing modules from the Comprehensive Perl Archive Network (CPAN) repository. The current directory . has been removed from the @INC module search path for security reasons. The do statement now returns a deprecation warning when it fails to load a file because of the behavioral change described above. The do subroutine(LIST) call is no longer supported and results in a syntax error. Hashes are randomized by default now. The order in which keys and values are returned from a hash changes on each perl run. To disable the randomization, set the PERL_PERTURB_KEYS environment variable to 0 . Unescaped literal { characters in regular expression patterns are no longer permissible. Lexical scope support for the $_ variable has been removed. Using the defined operator on an array or a hash results in a fatal error. Importing functions from the UNIVERSAL module results in a fatal error. The find2perl , s2p , a2p , c2ph , and pstruct tools have been removed. The ${^ENCODING} facility has been removed. The encoding pragma's default mode is no longer supported. To write source code in other encoding than UTF-8 , use the encoding's Filter option. The perl packaging is now aligned with upstream. The perl package installs also core modules, while the /usr/bin/perl interpreter is provided by the perl-interpreter package. In previous releases, the perl package included just a minimal interpreter, whereas the perl-core package included both the interpreter and the core modules. The IO::Socket::SSL Perl module no longer loads a certificate authority certificate from the ./certs/my-ca.pem file or the ./ca directory, a server private key from the ./certs/server-key.pem file, a server certificate from the ./certs/server-cert.pem file, a client private key from the ./certs/client-key.pem file, and a client certificate from the ./certs/client-cert.pem file. Specify the paths to the files explicitly instead. (BZ#1511131) Node.js new in RHEL Node.js , a software development platform for building fast and scalable network applications in the JavaScript programming language, is provided for the first time in RHEL. It was previously available only as a Software Collection. RHEL 8 provides Node.js 10 .
(BZ#1622118) Notable changes in SWIG RHEL 8 includes the Simplified Wrapper and Interface Generator (SWIG) version 3.0, which provides numerous new features, enhancements, and bug fixes over the version 2.0 distributed in RHEL 7. Most notably, support for the C++11 standard has been implemented. SWIG now supports also Go 1.6 , PHP 7 , Octave 4.2 , and Python 3.5 . (BZ#1660051) Notable changes in Apache httpd RHEL 8 is distributed with the Apache HTTP Server 2.4.37. This version introduces the following changes over httpd available in RHEL 7: HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd module. Automated TLS certificate provisioning and renewal using the Automatic Certificate Management Environment (ACME) protocol is now supported with the mod_md package (for use with certificate providers such as Let's Encrypt ) The Apache HTTP Server now supports loading TLS certificates and private keys from hardware security tokens directly from PKCS#11 modules. As a result, a mod_ssl configuration can now use PKCS#11 URLs to identify the TLS private key, and, optionally, the TLS certificate in the SSLCertificateKeyFile and SSLCertificateFile directives. The multi-processing module (MPM) configured by default with the Apache HTTP Server has changed from a multi-process, forked model (known as prefork ) to a high-performance multi-threaded model, event . Any third-party modules that are not thread-safe need to be replaced or removed. To change the configured MPM, edit the /etc/httpd/conf.modules.d/00-mpm.conf file. See the httpd.conf(5) man page for more information. For more information about changes in httpd and its usage, see Setting up the Apache HTTP web server . (BZ#1632754, BZ#1527084, BZ#1581178) The nginx web server new in RHEL RHEL 8 introduces nginx 1.14 , a web and proxy server supporting HTTP and other protocols, with a focus on high concurrency, performance, and low memory usage. nginx was previously available only as a Software Collection. The nginx web server now supports loading TLS private keys from hardware security tokens directly from PKCS#11 modules. As a result, an nginx configuration can use PKCS#11 URLs to identify the TLS private key in the ssl_certificate_key directive. (BZ#1545526) Database servers in RHEL 8 RHEL 8 provides the following database servers: MySQL 8.0 , a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon, mysqld , and many client programs. MariaDB 10.3 , a multi-user, multi-threaded SQL database server. For all practical purposes, MariaDB is binary-compatible with MySQL . PostgreSQL 10 and PostgreSQL 9.6 , an advanced object-relational database management system (DBMS). Redis 5 , an advanced key-value store. It is often referred to as a data structure server because keys can contain strings, hashes, lists, sets, and sorted sets. Redis is provided for the first time in RHEL. Note that the NoSQL MongoDB database server is not included in RHEL 8.0 because it uses the Server Side Public License (SSPL). (BZ#1647908) Notable changes in MySQL 8.0 RHEL 8 is distributed with MySQL 8.0 , which provides, for example, the following enhancements: MySQL now incorporates a transactional data dictionary, which stores information about database objects. MySQL now supports roles, which are collections of privileges. The default character set has been changed from latin1 to utf8mb4 . Support for common table expressions, both nonrecursive and recursive, has been added. 
MySQL now supports window functions, which perform a calculation for each row from a query, using related rows. InnoDB now supports the NOWAIT and SKIP LOCKED options with locking read statements. GIS-related functions have been improved. JSON functionality has been enhanced. The new mariadb-connector-c packages provide a common client library for MySQL and MariaDB . This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with RHEL 8. In addition, the MySQL 8.0 server distributed with RHEL 8 is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in RHEL 8 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: See also Using MySQL . (BZ#1649891, BZ#1519450, BZ#1631400) Notable changes in MariaDB 10.3 MariaDB 10.3 provides numerous new features over the version 5.5 distributed in RHEL 7, such as: Common table expressions System-versioned tables FOR loops Invisible columns Sequences Instant ADD COLUMN for InnoDB Storage-engine independent column compression Parallel replication Multi-source replication In addition, the new mariadb-connector-c packages provide a common client library for MySQL and MariaDB . This library is usable with any version of the MySQL and MariaDB database servers. As a result, the user is able to connect one build of an application to any of the MySQL and MariaDB servers distributed with RHEL 8. Other notable changes include: MariaDB Galera Cluster , a synchronous multi-master cluster, is now a standard part of MariaDB . InnoDB is used as the default storage engine instead of XtraDB . The mariadb-bench subpackage has been removed. The default allowed level of the plug-in maturity has been changed to one level less than the server maturity. As a result, plug-ins with a lower maturity level that were previously working, will no longer load. See also Using MariaDB . (BZ#1637034, BZ#1519450, BZ#1688374 ) Notable changes in PostgreSQL RHEL 8.0 provides two versions of the PostgreSQL database server, distributed in two streams of the postgresql module: PostgreSQL 10 (the default stream) and PostgreSQL 9.6 . RHEL 7 includes PostgreSQL version 9.2. Notable changes in PostgreSQL 9.6 are, for example: Parallel execution of the sequential operations: scan , join , and aggregate Enhancements to synchronous replication Improved full-text search enabling users to search for phrases The postgres_fdw data federation driver now supports remote join , sort , UPDATE , and DELETE operations Substantial performance improvements, especially regarding scalability on multi-CPU-socket servers Major enhancements in PostgreSQL 10 include: Logical replication using the publish and subscribe keywords Stronger password authentication based on the SCRAM-SHA-256 mechanism Declarative table partitioning Improved query parallelism Significant general performance improvements Improved monitoring and control See also Using PostgreSQL . (BZ#1660041) Notable changes in Squid RHEL 8.0 is distributed with Squid 4.4 , a high-performance proxy caching server for web clients, supporting FTP, Gopher, and HTTP data objects. 
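For the MySQL 8.0 authentication note above: the text refers to editing the /etc/my.cnf.d/mysql-default-authentication-plugin.cnf file without showing its contents. A minimal sketch of what the relevant setting could look like, assuming the standard [mysqld] section and the default_authentication_plugin option name used by MySQL 8.0; check the file shipped on your system before changing it:
[mysqld]
default_authentication_plugin=caching_sha2_password
Restart the mysqld service afterwards for the change to take effect.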
This release provides numerous new features, enhancements, and bug fixes over the version 3.5 available in RHEL 7. Notable changes include: Configurable helper queue size Changes to helper concurrency channels Changes to the helper binary Secure Internet Content Adaptation Protocol (ICAP) Improved support for Symmetric Multi Processing (SMP) Improved process management Removed support for SSL Removed Edge Side Includes (ESI) custom parser Multiple configuration changes ( BZ#1656871 ) Varnish Cache new in RHEL Varnish Cache , a high-performance HTTP reverse proxy, is provided for the first time in RHEL. It was previously available only as a Software Collection. Varnish Cache stores files or fragments of files in memory that are used to reduce the response time and network bandwidth consumption on future equivalent requests. RHEL 8.0 is distributed with Varnish Cache 6.0 . (BZ#1633338) 5.1.8. Desktop GNOME Shell, version 3.28 in RHEL 8 GNOME Shell, version 3.28 is available in Red Hat Enterprise Linux (RHEL) 8. Notable enhancements include: New GNOME Boxes features New on-screen keyboard Extended devices support, most significantly integration for the Thunderbolt 3 interface Improvements for GNOME Software, dconf-editor and GNOME Terminal (BZ#1649404) Wayland is the default display server With Red Hat Enterprise Linux 8, the GNOME session and the GNOME Display Manager (GDM) use Wayland as their default display server instead of the X.org server, which was used with the major version of RHEL. Wayland provides multiple advantages and improvements over X.org . Most notably: Stronger security model Improved multi-monitor handling Improved user interface (UI) scaling The desktop can control window handling directly. Note that the following features are currently unavailable or do not work as expected: Multi-GPU setups are not supported under Wayland . The NVIDIA binary driver does not work under Wayland . The xrandr utility does not work under Wayland due to its different approach to handling, resolutions, rotations, and layout. Note that other X.org utilities for manipulating the screen do not work under Wayland , either. Screen recording, remote desktop, and accessibility do not always work correctly under Wayland . No clipboard manager is available. Wayland ignores keyboard grabs issued by X11 applications, such as virtual machines viewers. Wayland inside guest virtual machines (VMs) has stability and performance problems, so it is recommended to use the X11 session for virtual environments. If you upgrade to RHEL 8 from a RHEL 7 system where you used the X.org GNOME session, your system continues to use X.org . The system also automatically falls back to X.org when the following graphics drivers are in use: The NVIDIA binary driver The cirrus driver The mga driver The aspeed driver You can disable the use of Wayland manually: To disable Wayland in GDM , set the WaylandEnable=false option in the /etc/gdm/custom.conf file. To disable Wayland in the GNOME session, select the legacy X11 option by using the cogwheel menu on the login screen after entering your login name. For more details on Wayland , see https://wayland.freedesktop.org/ . (BZ#1589678) Locating RPM packages that are in repositories not enabled by default Additional repositories for desktop are not enabled by default. The disablement is indicated by the enabled=0 line in the corresponding .repo file. 
If you attempt to install a package from such a repository using PackageKit, PackageKit shows an error message announcing that the application is not available. To make the package available, replace the enabled=0 line in the respective .repo file with enabled=1 . (JIRA:RHELPLAN-2878) GNOME Software for package management The gnome-packagekit package that provided a collection of tools for package management in a graphical environment on Red Hat Enterprise Linux 7 is no longer available. On Red Hat Enterprise Linux 8, similar functionality is provided by the GNOME Software utility, which enables you to install and update applications and gnome-shell extensions. GNOME Software is distributed in the gnome-software package. (JIRA:RHELPLAN-3001) Fractional scaling available for GNOME Shell on Wayland On a GNOME Shell on Wayland session, the fractional scaling feature is available. The feature makes it possible to scale the GUI by fractions, which improves the appearance of scaled GUI on certain displays. Note that the feature is currently considered experimental and is, therefore, disabled by default. To enable fractional scaling, run the command shown in the example below. ( BZ#1668883 ) 5.1.9. Hardware enablement Firmware updates using fwupd are available RHEL 8 supports firmware updates, such as UEFI capsule, Device Firmware Upgrade (DFU), and others, using the fwupd daemon. The daemon allows session software to update device firmware on a local machine automatically. To view and apply updates, you can use: A GUI software manager, such as GNOME Software The fwupdmgr command-line tool The metadata files are automatically downloaded from the Linux Vendor Firmware Service (LVFS) secure portal, and submitted into fwupd over D-Bus. The updates that need to be applied are downloaded, and user notifications and update details are displayed. The user must explicitly agree with the firmware update action before the update is performed. Note that the access to LVFS is disabled by default. To enable the access to LVFS, either click the slider in the sources dialog in GNOME Software, or run the fwupdmgr enable-remote lvfs command. If you use fwupdmgr to get the updates list, you will be asked if you want to enable LVFS. With access to LVFS, you will get firmware updates directly from the hardware vendor. Note that such updates have not been verified by Red Hat QA. (BZ#1504934) Memory Mode for Optane DC Persistent Memory technology is fully supported Intel Optane DC Persistent Memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput. To use the Memory Mode technology, your system does not require any special drivers or specific certification. Memory Mode is transparent to the operating system. (BZ#1718422) 5.1.10. Identity Management New password syntax checks in Directory Server This enhancement adds new password syntax checks to Directory Server. Administrators can now, for example, enable dictionary checks, allow or deny using character sequences and palindromes. As a result, if enabled, the password policy syntax check in Directory Server enforces more secure passwords. (BZ#1334254) Directory Server now provides improved internal operations logging support Several operations in Directory Server, initiated by the server and clients, cause additional operations in the background. Previously, the server logged only the Internal connection keyword for internal operations, and the operation ID was always set to -1 .
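The fractional scaling note above mentions a command without showing it. A hedged example, assuming the feature is toggled through the mutter experimental-features GSettings key, which is the usual mechanism for GNOME Shell on Wayland; run it as the desktop user:
$ gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
You may need to log out and back in for the setting to take effect.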
With this enhancement, Directory Server logs the real connection and operation ID. You can now trace the internal operation to the server or client operation that caused this operation. (BZ#1358706) The tomcatjss library supports OCSP checking using the responder from the AIA extension With this enhancement, the tomcatjss library supports Online Certificate Status Protocol (OCSP) checking using the responder from the Authority Information Access (AIA) extension of a certificate. As a result, administrators of Red Hat Certificate System can now configure OCSP checking that uses the URL from the AIA extension. (BZ#1636564) The pki subsystem-cert-find and pki subsystem-cert-show commands now show the serial number of certificates With this enhancement, the pki subsystem-cert-find and pki subsystem-cert-show commands in Certificate System show the serial number of certificates in their output. The serial number is an important piece of information and often required by multiple other commands. As a result, identifying the serial number of a certificate is now easier. (BZ#1566360) The pki user and pki group commands have been deprecated in Certificate System With this update, the new pki <subsystem> -user and pki <subsystem> -group commands replace the pki user and pki group commands in Certificate System. The replaced commands still works, but they display a message that the command is deprecated and refer to the new commands. (BZ#1394069) Certificate System now supports offline renewal of system certificates With this enhancement, administrators can use the offline renewal feature to renew system certificates configured in Certificate System. When a system certificate expires, Certificate System fails to start. As a result of the enhancement, administrators no longer need workarounds to replace an expired system certificate. ( BZ#1669257 ) Certificate System can now create CSRs with SKI extension for external CA signing With this enhancement, Certificate System supports creating a certificate signing request (CSR) with the Subject Key Identifier (SKI) extension for external certificate authority (CA) signing. Certain CAs require this extension either with a particular value or derived from the CA public key. As a result, administrators can now use the pki_req_ski parameter in the configuration file passed to the pkispawn utility to create a CSR with SKI extension. (BZ#1656856) SSSD no longer uses the fallback_homedir value from the [nss] section as fallback for AD domains Prior to RHEL 7.7, the SSSD fallback_homedir parameter in an Active Directory (AD) provider had no default value. If fallback_homedir was not set, SSSD used instead the value from the same parameter from the [nss] section in the /etc/sssd/sssd.conf file. To increase security, SSSD in RHEL 7.7 introduced a default value for fallback_homedir . As a consequence, SSSD no longer falls back to the value set in the [nss] section. If you want to use a different value than the default for the fallback_homedir parameter in an AD domain, you must manually set it in the domain's section. (BZ#1652719) SSSD now allows you to select one of the multiple Smartcard authentication devices By default, the System Security Services Daemon (SSSD) tries to detect a device for Smartcard authentication automatically. If there are multiple devices connected, SSSD selects the first one it detects. Consequently, you cannot select a particular device, which sometimes leads to failures. 
With this update, you can configure a new p11_uri option for the [pam] section of the sssd.conf configuration file. This option enables you to define which device is used for Smartcard authentication. For example, to select a reader with the slot id 2 detected by the OpenSC PKCS#11 module, add the corresponding p11_uri value to the [pam] section of sssd.conf (an example is shown after this section). For details, see the sssd.conf(5) man page. (BZ#1620123) Local users are cached by SSSD and served through the nss_sss module In RHEL 8, the System Security Services Daemon (SSSD) serves users and groups from the /etc/passwd and /etc/group files by default. The sss nsswitch module precedes files in the /etc/nsswitch.conf file. The advantage of serving local users through SSSD is that the nss_sss module has a fast memory-mapped cache that speeds up Name Service Switch (NSS) lookups compared to accessing the disk and opening the files on each NSS request. Previously, the Name service cache daemon ( nscd ) helped accelerate the process of accessing the disk. However, using nscd in parallel with SSSD is cumbersome, as both SSSD and nscd use their own independent caching. Consequently, using nscd in setups where SSSD is also serving users from a remote domain, for example LDAP or Active Directory, can cause unpredictable behavior. With this update, the resolution of local users and groups is faster in RHEL 8. Note that the root user is never handled by SSSD, therefore root resolution cannot be impacted by a potential bug in SSSD. Note also that if SSSD is not running, the nss_sss module handles the situation gracefully by falling back to nss_files to avoid problems. You do not have to configure SSSD in any way; the files domain is added automatically. (JIRA:RHELPLAN-10439) KCM replaces KEYRING as the default credential cache storage In RHEL 8, the default credential cache storage is the Kerberos Credential Manager (KCM) which is backed by the sssd-kcm daemon. KCM overcomes the limitations of the previously used KEYRING, which is difficult to use in containerized environments because it is not namespaced, and whose quotas are difficult to view and manage. With this update, RHEL 8 contains a credential cache that is better suited for containerized environments and that provides a basis for building more features in future releases. (JIRA:RHELPLAN-10440) Active Directory users can now administer Identity Management With this update, RHEL 8 allows adding a user ID override for an Active Directory (AD) user as a member of an Identity Management (IdM) group. An ID override is a record describing what the properties of a specific AD user or group should look like within a specific ID view, in this case the Default Trust View. As a consequence of the update, the IdM LDAP server is able to apply access control rules for the IdM group to the AD user. AD users are now able to use the self service features of the IdM UI, for example to upload their SSH keys, or change their personal data. An AD administrator is able to fully administer IdM without having two different accounts and passwords. Note that currently, selected features in IdM may still be unavailable to AD users. (JIRA:RHELPLAN-10442) sssctl prints an HBAC rules report for an IdM domain With this update, the sssctl utility of the System Security Services Daemon (SSSD) can print an access control report for an Identity Management (IdM) domain. This feature meets the need of certain environments to see, for regulatory reasons, a list of users and groups that can access a specific client machine.
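A hedged sketch of the p11_uri setting described in the Smartcard authentication note above. The URI below, selecting slot id 2 via the OpenSC module description, is illustrative only; the exact value depends on the reader and PKCS#11 module in use, so consult the sssd.conf(5) man page for the supported syntax:
[pam]
p11_uri = library-description=OpenSC%20smartcard%20framework;slot-id=2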
Running sssctl access-report domain_name on an IdM client prints the parsed subset of host-based access control (HBAC) rules in the IdM domain that apply to the client machine. Note that no providers other than IdM support this feature. (JIRA:RHELPLAN-10443) Identity Management packages are available as a module In RHEL 8, the packages necessary for installing an Identity Management (IdM) server and client are shipped as a module. The client stream is the default stream of the idm module and you can download the packages necessary for installing the client without enabling the stream. The IdM server module stream is called the DL1 stream. The stream contains multiple profiles corresponding to different types of IdM servers: server, dns, adtrust, client, and default. To download the packages in a specific profile of the DL1 stream: Enable the stream. Switch to the RPMs delivered through the stream. Run the yum module install idm:DL1/profile_name command (example commands are shown after this section). To switch to a new module stream once you have already enabled a specific stream and downloaded packages from it: Remove all the relevant installed content and disable the current module stream. Enable the new module stream. (JIRA:RHELPLAN-10438) Session recording solution for RHEL 8 added A session recording solution has been added to Red Hat Enterprise Linux 8 (RHEL 8). A new tlog package and its associated web console session player enable you to record and play back user terminal sessions. The recording can be configured per user or user group via the System Security Services Daemon (SSSD) service. All terminal input and output are captured and stored in a text-based format in a system journal. Recording of input is disabled by default for security reasons, so that raw passwords and other sensitive information are not intercepted. The solution can be used for auditing of user sessions on security-sensitive systems. In the event of a security breach, the recorded sessions can be reviewed as a part of a forensic analysis. System administrators are now able to configure the session recording locally and view the result from the RHEL 8 web console interface or from the Command-Line Interface using the tlog-play utility. (JIRA:RHELPLAN-1473) authselect simplifies the configuration of user authentication This update introduces the authselect utility that simplifies the configuration of user authentication on RHEL 8 hosts, replacing the authconfig utility. authselect comes with a safer approach to PAM stack management that makes PAM configuration changes simpler for system administrators. authselect can be used to configure authentication methods such as passwords, certificates, smart cards, and fingerprints. Note that authselect does not configure services required to join remote domains. This task is performed by specialized tools, such as realmd or ipa-client-install . (JIRA:RHELPLAN-10445) SSSD now enforces AD GPOs by default The default setting for the SSSD option ad_gpo_access_control is now enforcing . In RHEL 8, SSSD enforces access control rules based on Active Directory Group Policy Objects (GPOs) by default. Red Hat recommends ensuring GPOs are configured correctly in Active Directory before upgrading from RHEL 7 to RHEL 8. If you do not want to enforce GPOs, change the value of the ad_gpo_access_control option in the /etc/sssd/sssd.conf file to permissive . (JIRA:RHELPLAN-51289) 5.1.11. Compilers and development tools Boost updated to version 1.66 The Boost C++ library has been updated to upstream version 1.66.
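As a sketch of the IdM module workflow listed above (enable the DL1 stream, switch to its RPMs, install a profile), the commands typically look like the following; the server profile is used here purely as an example:
# yum module enable idm:DL1
# yum distro-sync
# yum module install idm:DL1/server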
The version of Boost included in Red Hat Enterprise Linux 7 is 1.53. For details, see the upstream changelogs: https://www.boost.org/users/history/ This update introduces the following changes breaking compatibility with previous versions: The bs_set_hook() function, the splay_set_hook() function from splay containers, and the bool splay = true extra parameter in the splaytree_algorithms() function in the Intrusive library have been removed. Comments or string concatenation in JSON files are no longer supported by the parser in the Property Tree library. Some distributions and special functions from the Math library have been fixed to behave as documented and raise an overflow_error instead of returning the maximum finite value. Some headers from the Math library have been moved into the directory libs/math/include_private . Behavior of the basic_regex<>::mark_count() and basic_regex<>::subexpression(n) functions from the Regex library has been changed to match their documentation. Use of variadic templates in the Variant library may break metaprogramming functions. The boost::python::numeric API has been removed. Users can use boost::python::numpy instead. Arithmetic operations on pointers to non-object types are no longer provided in the Atomic library. (BZ#1494495) Unicode 11.0.0 support The Red Hat Enterprise Linux core C library, glibc , has been updated to support the Unicode standard version 11.0.0. As a result, all wide character and multi-byte character APIs including transliteration and conversion between character sets provide accurate and correct information conforming to this standard. (BZ#1512004) The boost package is now independent of Python With this update, installing the boost package no longer installs the Boost.Python library as a dependency. In order to use Boost.Python , you need to explicitly install the boost-python3 or boost-python3-devel packages (see the example after this section). (BZ#1616244) A new compat-libgfortran-48 package available For compatibility with Red Hat Enterprise Linux 6 and 7 applications using the Fortran library, a new compat-libgfortran-48 compatibility package is now available, which provides the libgfortran.so.3 library. (BZ#1607227) Retpoline support in GCC This update adds support for retpolines to GCC. A retpoline is a software construct used by the kernel to reduce the overhead of mitigating Spectre Variant 2 attacks described in CVE-2017-5715. (BZ#1535774) Enhanced support for the 64-bit ARM architecture in toolchain components Toolchain components, GCC and binutils , now provide extended support for the 64-bit ARM architecture. For example: GCC and binutils now support Scalable Vector Extension (SVE). Support for the FP16 data type, provided by ARM v8.2, has been added to GCC . The FP16 data type improves performance of certain algorithms. Tools from binutils now support the ARM v8.3 architecture definition, including Pointer Authentication. The Pointer Authentication feature prevents malicious code from corrupting the normal execution of a program or the kernel by crafting their own function pointers. As a result, only trusted addresses are used when branching to different places in the code, which improves security. (BZ#1504980, BZ#1550501, BZ#1504995, BZ#1504993, BZ#1504994) Optimizations to glibc for IBM POWER systems This update provides a new version of glibc that is optimized for both IBM POWER 8 and IBM POWER 9 architectures. As a result, IBM POWER 8 and IBM POWER 9 systems now automatically switch to the appropriate, optimized glibc variant at run time.
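For the Boost.Python note above: because the boost package no longer pulls in the Python bindings, the development packages have to be requested explicitly, for example:
# yum install boost-python3-devel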
(BZ#1376834) GNU C Library updated to version 2.28 Red Hat Enterprise Linux 8 includes version 2.28 of the GNU C Library (glibc). Notable improvements include: Security hardening features: Secure binary files marked with the AT_SECURE flag ignore the LD_LIBRARY_PATH environment variable. Backtraces are no longer printed for stack checking failures to speed up shutdown and avoid running more code in a compromised environment. Performance improvements: Performance of the malloc() function has been improved with a thread local cache. Addition of the GLIBC_TUNABLES environment variable to alter library performance characteristics. Implementation of thread semaphores has been improved and new scalable pthread_rwlock_xxx() functions have been added. Performance of the math library has been improved. Support for Unicode 11.0.0 has been added. Improved support for 128-bit floating point numbers as defined by the ISO/IEC/IEEE 60559:2011, IEEE 754-2008, and ISO/IEC TS 18661-3:2015 standards has been added. Domain Name Service (DNS) stub resolver improvements related to the /etc/resolv.conf configuration file: Configuration is automatically reloaded when the file is changed. Support for an arbitrary number of search domains has been added. Proper random selection for the rotate option has been added. New features for development have been added, including: Linux wrapper functions for the preadv2 and pwritev2 kernel calls New functions including reallocarray() and explicit_bzero() New flags for the posix_spawnattr_setflags() function such as POSIX_SPAWN_SETSID (BZ#1512010, BZ#1504125, BZ#506398) CMake available in RHEL The CMake build system version 3.11 is available in Red Hat Enterprise Linux 8 as the cmake package. (BZ#1590139, BZ#1502802) make version 4.2.1 Red Hat Enterprise Linux 8 is distributed with the make build tool version 4.2.1. Notable changes include: When a recipe fails, the name of the makefile and line number of the recipe are shown. The --trace option has been added to enable tracing of targets. When this option is used, every recipe is printed before invocation even if it would be suppressed, together with the file name and line number where this recipe is located, and also with the prerequisites causing it to be invoked. Mixing explicit and implicit rules no longer causes make to terminate execution. Instead, a warning is printed. Note that this syntax is deprecated and may be completely removed in the future. The $(file ... ) function has been added to write text to a file. When called without a text argument, it only opens and immediately closes the file. A new option, --output-sync or -O , causes output from multiple jobs to be grouped per job and enables easier debugging of parallel builds. The --debug option now also accepts the n (none) flag to disable all currently enabled debugging settings. The != shell assignment operator has been added as an alternative to the $(shell ... ) function to increase compatibility with BSD makefiles. For more details and differences between the operator and the function, see the GNU make manual. Note that as a consequence, variables with a name ending in an exclamation mark and immediately followed by assignment, such as variable!=value , are now interpreted as the new syntax. To restore the previous behavior, add a space after the exclamation mark, such as variable! =value . The ::= assignment operator defined by the POSIX standard has been added.
When the .POSIX variable is specified, make observes the POSIX standard requirements for handling backslash and newline characters. In this mode, any trailing space before the backslash is preserved, and each backslash followed by a new line and white space characters is converted to a single space character. Behavior of the MAKEFLAGS and MFLAGS variables is now more precisely defined. A new variable, GNUMAKEFLAGS , is parsed for make flags identically to MAKEFLAGS . As a consequence, GNU make -specific flags can be stored outside MAKEFLAGS and portability of makefiles is increased. A new variable, MAKE_HOST , containing the host architecture has been added. The new variables, MAKE_TERMOUT and MAKE_TERMERR , indicate whether make is writing standard output and error to a terminal. Setting the -r and -R options in the MAKEFLAGS variable inside a makefile now works correctly and removes all built-in rules and variables, respectively. The .RECIPEPREFIX setting is now remembered per recipe. Additionally, variables expanded in that recipe also use that recipe prefix setting. The .RECIPEPREFIX setting and all target-specific variables are displayed in the output of the -p option as if in a makefile, instead of as comments. (BZ#1641015) SystemTap version 4.0 Red Hat Enterprise Linux 8 is distributed with the SystemTap instrumentation tool version 4.0. Notable improvements include: The extended Berkeley Packet Filter (eBPF) backend has been improved, especially its handling of strings and functions. To use this backend, start SystemTap with the --runtime=bpf option. A new export network service for use with the Prometheus monitoring system has been added. The system call probing implementation has been improved to use the kernel tracepoints if necessary. (BZ#1641032) Improvements in binutils version 2.30 Red Hat Enterprise Linux 8 includes version 2.30 of the binutils package. Notable improvements include: Support for new IBM Z architecture extensions has been improved. Linkers: The linker now puts code and read-only data into separate segments by default. As a result, the created executable files are bigger and safer to run, because the dynamic loader can disable execution of any memory page containing read-only data. Support for GNU Property notes which provide hints to the dynamic loader about the binary file has been added. Previously, the linker generated invalid executable code for the Intel Indirect Branch Tracking (IBT) technology. As a consequence, the generated executable files could not start. This bug has been fixed. Previously, the gold linker merged property notes improperly. As a consequence, wrong hardware features could be enabled in the generated code, and the code could terminate unexpectedly. This bug has been fixed. Previously, the gold linker created note sections with padding bytes at the end to achieve alignment according to architecture. Because the dynamic loader did not expect the padding, it could unexpectedly terminate the program it was loading. This bug has been fixed. Other tools: The readelf and objdump tools now have options to follow links into separate debug information files and display information in them, too. The new --inlines option extends the existing --line-numbers option of the objdump tool to display nesting information for inlined functions. The nm tool gained a new option --with-version-strings to display version information of a symbol after its name, if present.
Support for the ARMv8-R architecture and Cortex-R52, Cortex-M23, and Cortex-M33 processors has been added to the assembler. (BZ#1641004, BZ#1637072, BZ#1501420, BZ#1504114, BZ#1614908, BZ#1614920) Performance Co-Pilot version 4.3.0 Red Hat Enterprise Linux 8 is distributed with Performance Co-Pilot (PCP) version 4.3.0. Notable improvements include: The pcp-dstat tool now includes historical analysis and Comma-separated Values (CSV) format output. The log utilities can use metric labels and help text records. The pmdaperfevent tool now reports the correct CPU numbers at the lower Simultaneous Multi Threading (SMT) levels. The pmdapostgresql tool now supports Postgres series 10.x. The pmdaredis tool now supports Redis series 5.x. The pmdabcc tool has been enhanced with dynamic process filtering and per-process syscalls, ucalls, and ustat. The pmdammv tool now exports metric labels, and the format version is increased to 3. The pmdagfs2 tool supports additional glock and glock holder metrics. Several fixes have been made to the SELinux policy. (BZ#1641034) Memory Protection Keys This update enables hardware features which allow per-thread page protection flag changes. New glibc system call wrappers have been added for the pkey_alloc() , pkey_free() , and pkey_mprotect() functions. In addition, the pkey_set() and pkey_get() functions have been added to allow access to the per-thread protection flags. (BZ#1304448) GCC now defaults to z13 on IBM Z With this update, by default GCC on the IBM Z architecture builds code for the z13 processor, and the code is tuned for the z14 processor. This is equivalent to using the -march=z13 and -mtune=z14 options. Users can override this default by explicitly using options for target architecture and tuning. (BZ#1571124) elfutils updated to version 0.174 In Red Hat Enterprise Linux 8, the elfutils package is available in version 0.174. Notable changes include: Previously, the eu-readelf tool could show a variable with a negative value as if it had a large unsigned value, or show a large unsigned value as a negative value. This has been corrected and eu-readelf now looks up the size and signedness of constant value types to display them correctly. A new function dwarf_next_lines() for reading .debug_line data lacking CU has been added to the libdw library. This function can be used as an alternative to the dwarf_getsrclines() and dwarf_getsrcfiles() functions. Previously, files with more than 65280 sections could cause errors in the libelf and libdw libraries and all tools using them. This bug has been fixed. As a result, extended shnum and shstrndx values in ELF file headers are handled correctly. (BZ#1641007) Valgrind updated to version 3.14 Red Hat Enterprise Linux 8 is distributed with the Valgrind executable code analysis tool version 3.14. Notable changes include: A new --keep-debuginfo option has been added to enable retention of debug info for unloaded code. As a result, saved stack traces can include file and line information for code that is no longer present in memory. Suppressions based on source file name and line number have been added. The Helgrind tool has been extended with an option --delta-stacktrace to specify computation of full history stack traces. Notably, using this option together with --history-level=full can improve Helgrind performance by up to 25%. The false positive rate in the Memcheck tool for optimized code on the Intel and AMD 64-bit architectures and the ARM 64-bit architecture has been reduced.
Note that you can use the --expensive-definedness-checks to control handling of definedness checks and improve the rate at the expense of performance. Valgrind can now recognize more instructions of the little-endian variant of IBM Power Systems. Valgrind can now process most of the integer and string vector instructions of the IBM Z architecture z13 processor. For more information about the new options and their known limitations, see the valgrind(1) manual page. (BZ#1641029, BZ#1501419) GDB version 8.2 Red Hat Enterprise Linux 8 is distributed with the GDB debugger version 8.2 Notable changes include: The IPv6 protocol is supported for remote debugging with GDB and gdbserver . Debugging without debug information has been improved. Symbol completion in the GDB user interface has been improved to offer better suggestions by using more syntactic constructions such as ABI tags or namespaces. Commands can now be executed in the background. Debugging programs created in the Rust programming language is now possible. Debugging C and C++ languages has been improved with parser support for the _Alignof and alignof operators, C++ rvalue references, and C99 variable-length automatic arrays. GDB extension scripts can now use the Guile scripting language. The Python scripting language interface for extensions has been improved with new API functions, frame decorators, filters, and unwinders. Additionally, scripts in the .debug_gdb_scripts section of GDB configuration are loaded automatically. GDB now uses Python version 3 to run its scripts, including pretty printers, frame decorators, filters, and unwinders. The ARM and 64-bit ARM architectures have been improved with process execution record and replay, including Thumb 32-bit and system call instructions. GDB now supports the Scalable Vector Extension (SVE) on the 64-bit ARM architecture. Support for Intel PKU register and Intel Processor Trace has been added. Record and replay functionality has been extended to include the rdrand and rdseed instructions on Intel based systems. Functionality of GDB on the IBM Z architecture has been extended with support for tracepoints and fast tracepoints, vector registers and ABI, and the Catch system call. Additionally, GDB now supports more recent instructions of the architecture. GDB can now use the SystemTap static user space probes (SDT) on the 64-bit ARM architecture. (BZ#1641022, BZ#1497096, BZ#1505346, BZ#1592332, BZ#1550502) glibc localization for RHEL is distributed in multiple packages In RHEL 8, glibc locales and translations are no longer provided by the single glibc-common package. Instead, every locale and language is available in a glibc-langpack- CODE package. Additionally, in most cases not all locales are installed by default, only these selected in the installer. Users must install all further locale packages that they need separately, or if they wish they can install glibc-all-langpacks to get the locales archive containing all the glibc locales installed as before. For more information, see Using langpacks . (BZ#1512009) GCC version 8.2 In Red Hat Enterprise Linux 8, the GCC toolchain is based on the GCC 8.2 release series. Notable changes include: Numerous general optimizations have been added, such as alias analysis, vectorizer improvements, identical code folding, inter-procedural analysis, store merging optimization pass, and others. The Address Sanitizer has been improved. The Leak Sanitizer and Undefined Behavior Sanitizer have been added. 
Debug information can now be produced in the DWARF5 format. This capability is experimental. The source code coverage analysis tool GCOV has been extended with various improvements. New warnings and improved diagnostics have been added for static detection of more programming errors. GCC has been extended to provide tools to ensure additional hardening of the generated code. Improvements related to security include built-ins for overflow checking, additional protection against stack clash, checking target addresses of control-flow instructions, warnings for bounded string manipulation functions, and warnings to detect out-of-bounds array indices. Improvements to architecture and processor support include: Multiple new architecture-specific options for the Intel AVX-512 architecture, a number of its microarchitectures, and Intel Software Guard Extensions (SGX) have been added. Code generation can now target the 64-bit ARM architecture LSE extensions, ARMv8.2-A 16-bit Floating-Point Extensions (FPE), and ARMv8.2-A, ARMv8.3-A, and ARMv8.4-A architecture versions. Support for the z13 and z14 processors of the IBM Z architecture has been added. Notable changes related to languages and standards include: The default standard used when compiling code in the C language has changed to C17 with GNU extensions. The default standard used when compiling code in the C++ language has changed to C++14 with GNU extensions. The C++ runtime library now supports the C++11 and C++14 standards. The C++ compiler now implements the C++14 standard. Support for the C language standard C11 has been improved. The new __auto_type GNU C extension provides a subset of the functionality of C++11 auto keyword in the C language. The _FloatN and _FloatNx type names specified by the ISO/IEC TS 18661-3:2015 standard are now recognized by the C front end. Passing an empty class as an argument now takes up no space on the Intel 64 and AMD64 architectures, as required by the platform ABI. The value returned by the C++11 alignof operator has been corrected to match the C _Alignof operator and return minimum alignment. To find the preferred alignment, use the GNU extension __alignof__ . The main version of the libgfortran library for Fortran language code has been changed to 5. Support for the Ada (GNAT), GCC Go, and Objective C/C++ languages has been removed. Use the Go Toolset for Go code development. (JIRA:RHELPLAN-7437, BZ#1512593, BZ#1512378) The Go cryptographic library FIPS mode now honors system settings Previously, the Go standard cryptographic library always used its FIPS mode unless it was explicitly disabled at build time of the application using the library. As a consequence, users of Go-based applications could not control whether the FIPS mode was used. With this change, the library does not default to FIPS mode when the system is not configured in FIPS mode. As a result, users of Go-based applications on RHEL systems have more control over the use of the FIPS mode of the Go cryptographic library. (BZ#1633351) strace updated to version 4.24 Red Hat Enterprise Linux 8 is distributed with the strace tool version 4.24. Notable changes include: System call tampering features have been added with the -e inject= option. This includes injection of errors, return values, delays, and signals. System call qualification syntax has been improved: The -e trace=/regex option has been added to filter system calls with regular expressions. 
Prepending a question mark to a system call qualification in the -e trace= option lets strace continue, even if the qualification does not match any system call. Personality designation has been added to system call qualifications in the -e trace option. Decoding of kvm vcpu exit reason has been added. To do so, use the -e kvm=vcpu option. The libdw library from elfutils is now used for stack unwinding when the -k option is used. Additionally, symbol demangling is performed using the libiberty library. Previously, the -r option caused strace to ignore the -t option. This has been fixed, and the two options are now independent. The -A option has been added for opening output files in append mode. The -X option has been added for configuring xlat output formatting. Decoding of socket addresses with the -yy option has been improved. Additionally, block and character device number printing in -yy mode has been added. It is now possible to trace both 64-bit and 32-bit binaries with a single strace tool on the IBM Z architecture. As a consequence, the separate strace32 package no longer exists in RHEL 8. Additionally, decoding of the following items has been added, improved or updated: netlink protocols, messages and attributes arch_prctl , bpf , getsockopt , io_pgetevent , keyctl , prctl , pkey_alloc , pkey_free , pkey_mprotect , ptrace , rseq , setsockopt , socket , statx and other system calls Multiple commands for the ioctl system call Constants of various types Path tracing for execveat , inotify_add_watch , inotify_init , select , symlink , symlinkat system calls and mmap system calls with indirect arguments Lists of signal codes (BZ#1641014) Compiler toolsets in RHEL 8 RHEL 8.0 provides the following compiler toolsets as Application Streams: Clang and LLVM Toolset 7.0.1, which provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis. See the Using Clang and LLVM Toolset document. Rust Toolset 1.31, which provides the Rust programming language compiler rustc , the cargo build tool and dependency manager, the cargo-vendor plugin, and required libraries. See the Using Rust Toolset document. Go Toolset 1.11.5, which provides the Go programming language tools and libraries. Go is alternatively known as golang . See the Using Go Toolset document. ( BZ#1695698 , BZ#1613515, BZ#1613516, BZ#1613518) Java implementations and Java tools in RHEL 8 The RHEL 8 AppStream repository includes: The java-11-openjdk packages, which provide the OpenJDK 11 Java Runtime Environment and the OpenJDK 11 Java Software Development Kit. The java-1.8.0-openjdk packages, which provide the OpenJDK 8 Java Runtime Environment and the OpenJDK 8 Java Software Development Kit. The icedtea-web packages, which provide an implementation of Java Web Start. The ant module, providing a Java library and command-line tool for compiling, assembling, testing, and running Java applications. Ant has been updated to version 1.10. The maven module, providing a software project management and comprehension tool. Maven was previously available only as a Software Collection or in the unsupported Optional channel. The scala module, providing a general purpose programming language for the Java platform. Scala was previously available only as a Software Collection. In addition, the java-1.8.0-ibm packages are distributed through the Supplementary repository. Note that packages in this repository are unsupported by Red Hat. 
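A hedged illustration of the system call tampering feature from the strace 4.24 item above: the -e inject= option can, for example, make the third openat call observed in a traced program fail with ENOENT. The traced command (ls) and the numbers are arbitrary placeholders; consult the strace(1) man page for the exact fault-injection syntax supported by your build:
$ strace -e trace=openat -e inject=openat:error=ENOENT:when=3 ls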
(BZ#1699535) C++ ABI change in std::string and std::list The Application Binary Interface (ABI) of the std::string and std::list classes from the libstdc++ library changed between RHEL 7 (GCC 4.8) and RHEL 8 (GCC 8) to conform to the C++11 standard. The libstdc++ library supports both the old and new ABI, but some other C++ system libraries do not. As a consequence, applications that dynamically link against these libraries will need to be rebuilt. This affects all C++ standard modes, including C++98. It also affects applications built with Red Hat Developer Toolset compilers for RHEL 7, which kept the old ABI to maintain compatibility with the system libraries. (BZ#1704867) 5.1.12. File systems and storage Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is supported on configurations where the hardware vendor has qualified it and provides full support for the particular host bus adapter (HBA) and storage array configuration on RHEL. DIF/DIX is not supported on the following configurations: It is not supported for use on the boot device. It is not supported on virtualized guests. Red Hat does not support using the Automatic Storage Management library (ASMLib) when DIF/DIX is enabled. DIF/DIX is enabled or disabled at the storage device, which involves various layers up to (and including) the application. The method for activating the DIF on storage devices is device-dependent. For further information on the DIF/DIX feature, see What is DIF/DIX . (BZ#1649493) XFS now supports shared copy-on-write data extents The XFS file system supports shared copy-on-write data extent functionality. This feature enables two or more files to share a common set of data blocks. When either of the files sharing common blocks changes, XFS breaks the link to common blocks and creates a new file. This is similar to the copy-on-write (COW) functionality found in other file systems. Shared copy-on-write data extents are: Fast Creating shared copies does not utilize disk I/O. Space-efficient Shared blocks do not consume additional disk space. Transparent Files sharing common blocks act like regular files. Userspace utilities can use shared copy-on-write data extents for: Efficient file cloning, such as with the cp --reflink command Per-file snapshots This functionality is also used by kernel subsystems such as Overlayfs and NFS for more efficient operation. Shared copy-on-write data extents are now enabled by default when creating an XFS file system, starting with the xfsprogs package version 4.17.0-2.el8 . Note that Direct Access (DAX) devices currently do not support XFS with shared copy-on-write data extents. To create an XFS file system without this feature, use the following command: Red Hat Enterprise Linux 7 can mount XFS file systems with shared copy-on-write data extents only in the read-only mode. (BZ#1494028) Maximum XFS file system size is 1024 TiB The maximum supported size of an XFS file system has been increased from 500 TiB to 1024 TiB. File systems larger than 500 TiB require that: the metadata CRC feature and the free inode btree feature are both enabled in the file system format, and the allocation group size is at least 512 GiB. In RHEL 8, the mkfs.xfs utility creates file systems that meet these requirements by default. Growing a smaller file system that does not meet these requirements to a new size greater than 500 TiB is not supported. (BZ#1563617) ext4 file system now supports metadata checksum With this update, ext4 metadata is protected by checksums . 
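The XFS shared copy-on-write section above mentions a command for creating a file system without that feature but does not show it. A minimal sketch, assuming the reflink option of mkfs.xfs and a placeholder block device:
# mkfs.xfs -m reflink=0 /dev/sdX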
This enables the file system to recognize corrupt metadata, which avoids damage and increases file system resilience. ( BZ#1695584 ) VDO now supports all architectures Virtual Data Optimizer (VDO) is now available on all of the architectures supported by RHEL 8. For the list of supported architectures, see Chapter 2, Architectures . (BZ#1534087) The BOOM boot manager simplifies the process of creating boot entries BOOM is a boot manager for Linux systems that use boot loaders supporting the BootLoader Specification for boot entry configuration. It enables flexible boot configuration and simplifies the creation of new or modified boot entries: for example, to boot snapshot images of the system created using LVM. BOOM does not modify the existing boot loader configuration, and only inserts additional entries. The existing configuration is maintained, and any distribution integration, such as kernel installation and update scripts, continues to function as before. BOOM has a simplified command-line interface (CLI) and API that ease the task of creating boot entries. (BZ#1649582) LUKS2 is now the default format for encrypting volumes In RHEL 8, the LUKS version 2 (LUKS2) format replaces the legacy LUKS (LUKS1) format. The dm-crypt subsystem and the cryptsetup tool now use LUKS2 as the default format for encrypted volumes. LUKS2 provides encrypted volumes with metadata redundancy and auto-recovery in case of a partial metadata corruption event. Due to the internal flexible layout, LUKS2 is also an enabler of future features. It supports auto-unlocking through the generic kernel-keyring token built into libcryptsetup, which allows users to unlock LUKS2 volumes using a passphrase stored in the kernel-keyring retention service. Other notable enhancements include: The protected key setup using the wrapped key cipher scheme. Easier integration with Policy-Based Decryption (Clevis). Up to 32 key slots - LUKS1 provides only 8 key slots. For more details, see the cryptsetup(8) and cryptsetup-reencrypt(8) man pages. (BZ#1564540) NVMe/FC is fully supported on Broadcom Emulex and Marvell Qlogic Fibre Channel adapters The NVMe over Fibre Channel (NVMe/FC) transport type is now fully supported in Initiator mode when used with Broadcom Emulex and Marvell Qlogic Fibre Channel 32Gbit adapters that feature NVMe support. NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux. Enabling NVMe/FC: To enable NVMe/FC in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add the required lpfc module option. To enable NVMe/FC in the qla2xxx driver, edit the /etc/modprobe.d/qla2xxx.conf file and add the corresponding qla2xxx module option. Examples of both options are shown after this section. Additional restrictions: Multipath is not supported with NVMe/FC. NVMe clustering is not supported with NVMe/FC. kdump is not supported with NVMe/FC. Booting from Storage Area Network (SAN) NVMe/FC is not supported. (BZ#1649497) New scan_lvs configuration setting A new lvm.conf configuration file setting, scan_lvs , has been added and set to 0 by default. The new default behavior stops LVM from looking for PVs that may exist on top of LVs; that is, it will not scan active LVs for more PVs. The default setting also prevents LVM from creating PVs on top of LVs. Layering PVs on top of LVs can occur by way of VM images placed on top of LVs, in which case it is not safe for the host to access the PVs.
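Hedged examples of the module options referred to in the NVMe/FC section above. The option names below are the ones commonly documented for these drivers, but treat them as assumptions and verify them against the modinfo output for the driver versions installed on your system:
options lpfc lpfc_enable_fc4_type=3
options qla2xxx ql2xnvmeenable=1
The first line goes into /etc/modprobe.d/lpfc.conf and the second into /etc/modprobe.d/qla2xxx.conf, as described above.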
Avoiding this unsafe access is the primary reason for the new default behavior. Also, in environments with many active LVs, the amount of device scanning done by LVM can be significantly decreased. The behavior can be restored by changing this setting to 1. ( BZ#1676598 ) New overrides section of the DM Multipath configuration file The /etc/multipath.conf file now includes an overrides section that allows you to set a configuration value for all of your devices. These attributes are used by DM Multipath for all devices unless they are overwritten by the attributes specified in the multipaths section of the /etc/multipath.conf file for paths that contain the device. This functionality replaces the all_devs parameter of the devices section of the configuration file, which is no longer supported. (BZ#1643294) Installing and booting from NVDIMM devices is now supported Prior to this update, Nonvolatile Dual Inline Memory Module (NVDIMM) devices in any mode were ignored by the installer. With this update, kernel improvements to support NVDIMM devices provide improved system performance capabilities and enhanced file system access for write-intensive applications like database or analytic workloads, as well as reduced CPU overhead. This update introduces support for: The use of NVDIMM devices for installation using the nvdimm Kickstart command and the GUI, making it possible to install and boot from NVDIMM devices in sector mode and reconfigure NVDIMM devices into sector mode during installation. The extension of Kickstart scripts for Anaconda with commands for handling NVDIMM devices. The ability of grub2 , efibootmgr , and efivar system components to handle and boot from NVDIMM devices. (BZ#1499442) The detection of marginal paths in DM Multipath has been improved The multipathd service now supports improved detection of marginal paths. This helps multipath devices avoid paths that are likely to fail repeatedly, and improves performance. Marginal paths are paths with persistent but intermittent I/O errors. The following options in the /etc/multipath.conf file control marginal paths behavior: marginal_path_double_failed_time , marginal_path_err_sample_time , marginal_path_err_rate_threshold , and marginal_path_err_recheck_gap_time . DM Multipath disables a path and tests it with repeated I/O for the configured sample time if: the listed multipath.conf options are set, a path fails twice in the configured time, and other paths are available. If the path has more than the configured err rate during this testing, DM Multipath ignores it for the configured gap time, and then retests it to see if it is working well enough to be reinstated. For more information, see the multipath.conf man page. (BZ#1643550) Multiqueue scheduling on block devices Block devices now use multiqueue scheduling in Red Hat Enterprise Linux 8. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems. The traditional schedulers, which were available in RHEL 7 and earlier versions, have been removed. RHEL 8 supports only multiqueue schedulers. (BZ#1647612) 5.1.13. High availability and clusters New pcs commands to list available watchdog devices and test watchdog devices In order to configure SBD with Pacemaker, a functioning watchdog device is required. This release supports the pcs stonith sbd watchdog list command to list available watchdog devices on the local node, and the pcs stonith sbd watchdog test command to test a watchdog device. 
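A minimal sketch of the overrides section described in the DM Multipath note above, assuming a generic attribute (no_path_retry) chosen purely for illustration; any attribute that is valid for the devices section can be placed here and then applies to all devices unless a multipaths entry overrides it:
overrides {
        no_path_retry fail
}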
For information on the sbd command line tool, see the sbd (8) man page. (BZ#1578891) The pcs command now supports filtering resource failures by an operation and its interval Pacemaker now tracks resource failures per a resource operation on top of a resource name, and a node. The pcs resource failcount show command now allows filtering failures by a resource, node, operation, and interval. It provides an option to display failures aggregated per a resource and node or detailed per a resource, node, operation, and its interval. Additionally, the pcs resource cleanup command now allows filtering failures by a resource, node, operation, and interval. (BZ#1591308) Timestamps enabled in corosync log The corosync log did not previously contain timestamps, which made it difficult to relate it to logs from other nodes and daemons. With this release, timestamps are present in the corosync log. (BZ#1615420) New formats for pcs cluster setup , pcs cluster node add and pcs cluster node remove commands In Red Hat Enterprise Linux 8, pcs fully supports Corosync 3, knet , and node names. Node names are now required and replace node addresses in the role of node identifier. Node addresses are now optional. In the pcs host auth command, node addresses default to node names. In the pcs cluster setup and pcs cluster node add commands, node addresses default to the node addresses specified in the pcs host auth command. With these changes, the formats for the commands to set up a cluster, add a node to a cluster, and remove a node from a cluster have changed. For information on these new command formats, see the help display for the pcs cluster setup , pcs cluster node add and pcs cluster node remove commands. (BZ#1158816) New pcs commands Red Hat Enterprise Linux 8 introduces the following new commands. RHEL 8 introduces a new command, pcs cluster node add-guest | remove-guest , which replaces the pcs cluster remote-node add | remove command in RHEL 7. RHEL 8 introduces a new command, pcs quorum unblock , which replaces the pcs cluster quorum unblock command in RHEL 7. The pcs resource failcount reset command has been removed as it duplicates the functionality of the pcs resource cleanup command. RHEL 8 introduces new commands which replace the pcs resource [show] command in RHEL 7: The pcs resource [status] command in RHEL 8 replaces the pcs resource [show] command in RHEL 7. The pcs resource config command in RHEL 8 replaces the pcs resource [show] --full command in RHEL 7. The pcs resource config resource id command in RHEL 8 replaces the pcs resource show resource id command in RHEL 7. RHEL 8 introduces new commands which replace the pcs stonith [show] command in RHEL 7: The pcs stonith [status] command in RHEL 8 replaces the pcs stonith [show] command in RHEL 7. The pcs stonith config command in RHEL 8 replaces the pcs stonith [show] --full command in RHEL 7. The pcs stonith config resource id command in RHEL 8 replaces the pcs stonith show resource id command in RHEL 7. (BZ#1654280) Pacemaker 2.0.0 in RHEL 8 The pacemaker packages have been upgraded to the upstream version of Pacemaker 2.0.0, which provides a number of bug fixes and enhancements over the version: The Pacemaker detail log is now /var/log/pacemaker/pacemaker.log by default (not directly in /var/log or combined with the corosync log under /var/log/cluster ). The Pacemaker daemon processes have been renamed to make reading the logs more intuitive. For example, pengine has been renamed to pacemaker-schedulerd . 
Support for the deprecated default-resource-stickiness and is-managed-default cluster properties has been dropped. The resource-stickiness and is-managed properties should be set in resource defaults instead. Existing configurations (though not newly created ones) with the deprecated syntax will automatically be updated to use the supported syntax. For a more complete list of changes, see Pacemaker 2.0 upgrade in Red Hat Enterprise Linux 8 . It is recommended that users who are upgrading an existing cluster using Red Hat Enterprise Linux 7 or earlier, run pcs cluster cib-upgrade on any cluster node before and after upgrading RHEL on all cluster nodes. ( BZ#1543494 ) Master resources renamed to promotable clone resources Red Hat Enterprise Linux (RHEL) 8 supports Pacemaker 2.0, in which a master/slave resource is no longer a separate type of resource but a standard clone resource with a promotable meta-attribute set to true . The following changes have been implemented in support of this update: It is no longer possible to create master resources with the pcs command. Instead, it is possible to create promotable clone resources. Related keywords and commands have been changed from master to promotable . All existing master resources are displayed as promotable clone resources. When managing a RHEL7 cluster in the Web UI, master resources are still called master, as RHEL7 clusters do not support promotable clones. (BZ#1542288) New commands for authenticating nodes in a cluster Red Hat Enterprise Linux (RHEL) 8 incorporates the following changes to the commands used to authenticate nodes in a cluster. The new command for authentication is pcs host auth . This command allows users to specify host names, addresses and pcsd ports. The pcs cluster auth command authenticates only the nodes in a local cluster and does not accept a node list It is now possible to specify an address for each node. pcs / pcsd will then communicate with each node using the specified address. These addresses can be different than the ones corosync uses internally. The pcs pcsd clear-auth command has been replaced by the pcs pcsd deauth and pcs host deauth commands. The new commands allow users to deauthenticate a single host as well as all hosts. Previously, node authentication was bidirectional, and running the pcs cluster auth command caused all specified nodes to be authenticated against each other. The pcs host auth command, however, causes only the local host to be authenticated against the specified nodes. This allows better control of what node is authenticated against what other nodes when running this command. On cluster setup itself, and also when adding a node, pcs automatically synchronizes tokens on the cluster, so all nodes in the cluster are still automatically authenticated as before and the cluster nodes can communicate with each other. Note that these changes are not backward compatible. Nodes that were authenticated on a RHEL 7 system will need to be authenticated again. (BZ#1549535) The pcs commands now support display, cleanup, and synchronization of fencing history Pacemaker's fence daemon tracks a history of all fence actions taken (pending, successful, and failed). 
With this release, the pcs commands allow users to access the fencing history in the following ways: The pcs status command shows failed and pending fencing actions The pcs status --full command shows the entire fencing history The pcs stonith history command provides options to display and clean up fencing history Although fencing history is synchronized automatically, the pcs stonith history command now supports an update option that allows a user to manually synchronize fencing history should that be necessary (BZ#1620190, BZ#1615891) 5.1.14. Networking nftables replaces iptables as the default network packet filtering framework The nftables framework provides packet classification facilities and it is the designated successor to the iptables , ip6tables , arptables , and ebtables tools. It offers numerous improvements in convenience, features, and performance over packet-filtering tools, most notably: lookup tables instead of linear processing a single framework for both the IPv4 and IPv6 protocols rules all applied atomically instead of fetching, updating, and storing a complete ruleset support for debugging and tracing in the ruleset ( nftrace ) and monitoring trace events (in the nft tool) more consistent and compact syntax, no protocol-specific extensions a Netlink API for third-party applications Similarly to iptables , nftables use tables for storing chains. The chains contain individual rules for performing actions. The nft tool replaces all tools from the packet-filtering frameworks. The libnftables library can be used for low-level interaction with nftables Netlink API over the libmnl library. The iptables , ip6tables , ebtables and arptables tools are replaced by nftables-based drop-in replacements with the same name. While external behavior is identical to their legacy counterparts, internally they use nftables with legacy netfilter kernel modules through a compatibility interface where required. Effect of the modules on the nftables ruleset can be observed using the nft list ruleset command. Since these tools add tables, chains, and rules to the nftables ruleset, be aware that nftables rule-set operations, such as the nft flush ruleset command, might affect rule sets installed using the formerly separate legacy commands. To quickly identify which variant of the tool is present, version information has been updated to include the back-end name. In RHEL 8, the nftables-based iptables tool prints the following version string: For comparison, the following version information is printed if legacy iptables tool is present: (BZ#1644030) Notable TCP features in RHEL 8 Red Hat Enterprise Linux 8 is distributed with TCP networking stack version 4.18, which provides higher performances, better scalability, and more stability. Performances are boosted especially for busy TCP server with a high ingress connection rate. Additionally, two new TCP congestion algorithms, BBR and NV , are available, offering lower latency, and better throughput than cubic in most scenarios. (BZ#1562998) firewalld uses nftables by default With this update, the nftables filtering subsystem is the default firewall backend for the firewalld daemon. To change the backend, use the FirewallBackend option in the /etc/firewalld/firewalld.conf file. 
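For example, switching the backend amounts to a single line in the configuration file, followed by a restart of firewalld; this is a minimal sketch of the setting rather than a complete configuration:
# /etc/firewalld/firewalld.conf
FirewallBackend=nftables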
This change introduces the following differences in behavior when using nftables : iptables rule executions always occur before firewalld rules DROP in iptables means a packet is never seen by firewalld ACCEPT in iptables means a packet is still subject to firewalld rules firewalld direct rules are still implemented through iptables while other firewalld features use nftables direct rule execution occurs before firewalld generic acceptance of established connections (BZ#1509026) Notable change in wpa_supplicant in RHEL 8 In Red Hat Enterprise Linux (RHEL) 8, the wpa_supplicant package is built with CONFIG_DEBUG_SYSLOG enabled. This allows reading the wpa_supplicant log using the journalctl utility instead of checking the contents of the /var/log/wpa_supplicant.log file. (BZ#1582538) NetworkManager now supports SR-IOV virtual functions In Red Hat Enterprise Linux 8.0, NetworkManager allows configuring the number of virtual functions (VF) for interfaces that support single-root I/O virtualization (SR-IOV). Additionally, NetworkManager allows configuring some attributes of the VFs, such as the MAC address, VLAN, the spoof checking setting and allowed bitrates. Note that all properties related to SR-IOV are available in the sriov connection setting. For more details, see the nm-settings(5) man page. (BZ#1555013) IPVLAN virtual network drivers are now supported In Red Hat Enterprise Linux 8.0, the kernel includes support for IPVLAN virtual network drivers. With this update, IPVLAN virtual Network Interface Cards (NICs) enable the network connectivity for multiple containers exposing a single MAC address to the local network. This allows a single host to have a lot of containers overcoming the possible limitation on the number of MAC addresses supported by the peer networking equipment. (BZ#1261167) NetworkManager supports a wildcard interface name match for connections Previously, it was possible to restrict a connection to a given interface using only an exact match on the interface name. With this update, connections have a new match.interface-name property which supports wildcards. This update enables users to choose the interface for a connection in a more flexible way using a wildcard pattern. (BZ#1555012) Improvements in the networking stack 4.18 Red Hat Enterprise Linux 8.0 includes the networking stack upgraded to upstream version 4.18, which provides several bug fixes and enhancements. Notable changes include: Introduced new offload features, such as UDP_GSO , and, for some device drivers, GRO_HW . Improved significant scalability for the User Datagram Protocol (UDP). Improved the generic busy polling code. Improved scalability for the IPv6 protocol. Improved scalability for the routing code. Added a new default transmit queue scheduling algorithm, fq_codel , which improves a transmission delay. Improved scalability for some transmit queue scheduling algorithms. For example, pfifo_fast is now lockless. Improved scalability of the IP reassembly unit by removing the garbage collection kernel thread and ip fragments expire only on timeout. As a result, CPU usage under DoS is much lower, and the maximum sustainable fragments drop rate is limited by the amount of memory configured for the IP reassembly unit. (BZ#1562987) New tools to convert iptables to nftables This update adds the iptables-translate and ip6tables-translate tools to convert the existing iptables or ip6tables rules into the equivalent ones for nftables . Note that some extensions lack translation support. 
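As a sketch of a rule that does translate cleanly, the tool prints the equivalent nft command; the exact output can vary slightly between versions:
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
nft add rule ip filter INPUT tcp dport 22 counter accept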
If such an extension exists, the tool prints the untranslated rule prefixed with the # sign. For example: Additionally, users can use the iptables-restore-translate and ip6tables-restore-translate tools to translate a dump of rules. Note that before that, users can use the iptables-save or ip6tables-save commands to print a dump of current rules. For example: (BZ#1564596) New features added to VPN using NetworkManager In Red Hat Enterprise Linux 8.0, NetworkManager provides the following new features to VPN: Support for the Internet Key Exchange version 2 (IKEv2) protocol. Added some more Libreswan options, such as the rightid , leftcert , narrowing , rekey , fragmentation options. For more details on the supported options, see the nm-settings-libreswan man page. Updated the default ciphers. This means that when the user does not specify the ciphers, the NetworkManager-libreswan plugin allows the Libreswan application to choose the system default cipher. The only exception is when the user selects an IKEv1 aggressive mode configuration. In this case, the ike = aes256-sha1;modp1536 and eps = aes256-sha1 values are passed to Libreswan . (BZ#1557035) A new data chunk type, I-DATA , added to SCTP This update adds a new data chunk type, I-DATA , and stream schedulers to the Stream Control Transmission Protocol (SCTP). Previously, SCTP sent user messages in the same order as they were sent by a user. Consequently, a large SCTP user message blocked all other messages in any stream until completely sent. When using I-DATA chunks, the Transmission Sequence Number (TSN) field is not overloaded. As a result, SCTP now can schedule the streams in different ways, and I-DATA allows user messages interleaving (RFC 8260). Note that both peers must support the I-DATA chunk type. (BZ#1273139) NetworkManager supports configuring ethtool offload features With this enhancement, NetworkManager supports configuring ethtool offload features, and users no longer need to use init scripts or a NetworkManager dispatcher script. As a result, users can now configure the offload feature as a part of the connection profile using one of the following methods: By using the nmcli utility By editing key files in the /etc/NetworkManager/system-connections/ directory By editing the /etc/sysconfig/network-scripts/ifcfg-* files Note that this feature is currently not supported in graphical interfaces and in the nmtui utility. (BZ#1335409) TCP BBR support in RHEL 8 A new TCP congestion control algorithm, Bottleneck Bandwidth and Round-trip time (BBR) is now supported in Red Hat Enterprise Linux (RHEL) 8. BBR attempts to determine the bandwidth of the bottleneck link and the Round-trip time (RTT). Most congestion algorithms are based on packet loss (including CUBIC, the default Linux TCP congestion control algorithm), which have problems on high-throughput links. BBR does not react to loss events directly, it adjusts the TCP pacing rate to match it with the available bandwidth. Users of TCP BBR should switch to the fq queueing setting on all the involved interfaces. Note that users should explicitly use fq and not fq_codel . For more details, see the tc-fq man page. (BZ#1515987) lksctp-tools , version 1.0.18 in RHEL 8 The lksctp-tools package, version 3.28 is available in Red Hat Enterprise Linux (RHEL) 8. 
Notable enhancements and bug fixes include: Integration with Travis CI and Coverity Scan Support for the sctp_peeloff_flags function Indication of which kernel features are available Fixes on Coverity Scan issues (BZ#1568622) Blacklisting SCTP module by default in RHEL 8 To increase security, a set of kernel modules have been moved to the kernel-modules-extra package. These are not installed by default. As a consequence, non-root users cannot load these components as they are blacklisted by default. To use one of these kernel modules, the system administrator must install kernel-modules-extra and explicitly remove the module blacklist. As a result, non-root users will be able to load the software component automatically. (BZ#1642795) Notable changes in driverctl 0.101 Red Hat Enterprise Linux 8.0 is distributed with driverctl 0.101. This version includes the following bug fixes: The shellcheck warnings have been fixed. The bash-completion is installed as driverctl instead of driverctl-bash-completion.sh . The load_override function for non-PCI buses has been fixed. The driverctl service loads all overrides before it reaches the basic.target systemd target. (BZ#1648411) Added rich rules priorities to firewalld The priority option has been added to rich rules. This allows users to define the desirable priority order during the rule execution and provides more advanced control over rich rules. (BZ#1648497) NVMe over RDMA is supported in RHEL 8 In Red Hat Enterprise Linux (RHEL) 8, Nonvolatile Memory Express (NVMe) over Remote Direct Memory Access (RDMA) supports Infiniband, RoCEv2, and iWARP only in initiator mode. Note that Multipath is supported in failover mode only. Additional restrictions: Kdump is not supported with NVMe/RDMA. Booting from NVMe device over RDMA is not supported. ( BZ#1680177 ) The nf_tables back end does not support debugging using dmesg Red Hat Enterprise Linux 8.0 uses the nf_tables back end for firewalls that does not support debugging the firewall using the output of the dmesg utility. To debug firewall rules, use the xtables-monitor -t or nft monitor trace commands to decode rule evaluation events. (BZ#1645744) Red Hat Enterprise Linux supports VRF The kernel in RHEL 8.0 supports virtual routing and forwarding (VRF). VRF devices, combined with rules set using the ip utility, enable administrators to create VRF domains in the Linux network stack. These domains isolate the traffic on layer 3 and, therefore, the administrator can create different routing tables and reuse the same IP addresses within different VRF domains on one host. (BZ#1440031) iproute , version 4.18 in RHEL 8 The iproute package is distributed with the version 4.18 in Red Hat Enterprise Linux (RHEL) 8. The most notable change is that the interface alias marked as ethX:Y, such as eth0:1, is no longer supported. To work around this problem, users should remove the alias suffix, which is the colon and the following number before entering ip link show . (BZ#1589317) 5.1.15. Security SWID tag of the RHEL 8.0 release To enable identification of RHEL 8.0 installations using the ISO/IEC 19770-2:2015 mechanism, software identification (SWID) tags are installed in files /usr/lib/swidtag/redhat.com/com.redhat.RHEL-8-<architecture>.swidtag and /usr/lib/swidtag/redhat.com/com.redhat.RHEL-8.0-<architecture>.swidtag . The parent directory of these tags can also be found by following the /etc/swid/swidtags.d/redhat.com symbolic link. 
The XML signature of the SWID tag files can be verified using the xmlsec1 verify command, for example: The certificate of the code signing certification authority can also be obtained from the Product Signing Keys page on the Customer Portal. (BZ#1636338) System-wide cryptographic policies are applied by default Crypto-policies is a component in Red Hat Enterprise Linux 8, which configures the core cryptographic subsystems, covering the TLS, IPsec, DNSSEC, Kerberos, and SSH protocols. It provides a small set of policies, which the administrator can select using the update-crypto-policies command. The DEFAULT system-wide cryptographic policy offers secure settings for current threat models. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if larger than 2047 bits. See the Consistent security by crypto policies in Red Hat Enterprise Linux 8 article on the Red Hat Blog and the update-crypto-policies(8) man page for more information. (BZ#1591620) OpenSSH rebased to version 7.8p1 The openssh packages have been upgraded to upstream version 7.8p1. Notable changes include: Removed support for the SSH version 1 protocol. Removed support for the hmac-ripemd160 message authentication code. Removed support for RC4 ( arcfour ) ciphers. Removed support for Blowfish ciphers. Removed support for CAST ciphers. Changed the default value of the UseDNS option to no . Disabled DSA public key algorithms by default. Changed the minimal modulus size for Diffie-Hellman parameters to 2048 bits. Changed semantics of the ExposeAuthInfo configuration option. The UsePrivilegeSeparation=sandbox option is now mandatory and cannot be disabled. Set the minimal accepted RSA key size to 1024 bits. (BZ#1622511) The automatic OpenSSH server keys generation is now handled by [email protected] OpenSSH creates RSA, ECDSA, and ED25519 server host keys automatically if they are missing. To configure the host key creation in RHEL 8, use the [email protected] instantiated service. For example, to disable the automatic creation of the RSA key type: See the /etc/sysconfig/sshd file for more information. (BZ#1228088) ECDSA keys are supported for SSH authentication This release of the OpenSSH suite introduces support for ECDSA keys stored on PKCS #11 smart cards. As a result, users can now use both RSA and ECDSA keys for SSH authentication. (BZ#1645038) libssh implements SSH as a core cryptographic component This change introduces libssh as a core cryptographic component in Red Hat Enterprise Linux 8. The libssh library implements the Secure Shell (SSH) protocol. Note that the client side of libssh follows the configuration set for OpenSSH through system-wide crypto policies, but the configuration of the server side cannot be changed through system-wide crypto policies. (BZ#1485241) TLS 1.3 support in cryptographic libraries This update enables Transport Layer Security (TLS) 1.3 by default in all major back-end crypto libraries. This enables low latency across the operating system communications layer and enhances privacy and security for applications by taking advantage of new algorithms, such as RSA-PSS or X25519. (BZ#1516728) NSS now use SQL by default The Network Security Services (NSS) libraries now use the SQL file format for the trust database by default. 
The DBM file format, which was used as a default database format in releases, does not support concurrent access to the same database by multiple processes and it has been deprecated in upstream. As a result, applications that use the NSS trust database to store keys, certificates, and revocation information now create databases in the SQL format by default. Attempts to create databases in the legacy DBM format fail. The existing DBM databases are opened in read-only mode, and they are automatically converted to the SQL format. Note that NSS support the SQL file format since Red Hat Enterprise Linux 6. (BZ#1489094) PKCS #11 support for smart cards and HSMs is now consistent across the system With this update, using smart cards and Hardware Security Modules (HSM) with PKCS #11 cryptographic token interface becomes consistent. This means that the user and the administrator can use the same syntax for all related tools in the system. Notable enhancements include: Support for the PKCS #11 Uniform Resource Identifier (URI) scheme that ensures a simplified enablement of tokens on RHEL servers both for administrators and application writers. A system-wide registration method for smart cards and HSMs using the pkcs11.conf . Consistent support for HSMs and smart cards is available in NSS, GnuTLS, and OpenSSL (through the openssl-pkcs11 engine) applications. The Apache HTTP server ( httpd ) now seamlessly supports HSMs. For more information, see the pkcs11.conf(5) man page. (BZ#1516741) Firefox now works with system-wide registered PKCS #11 drivers The Firefox web browser automatically loads the p11-kit-proxy module and every smart card that is registered system-wide in p11-kit through the pkcs11.conf file is automatically detected. For using TLS client authentication, no additional setup is required and keys from a smart card are automatically used when a server requests them. (BZ#1595638) RSA-PSS is now supported in OpenSC This update adds support for the RSA-PSS cryptographic signature scheme to the OpenSC smart card driver. The new scheme enables a secure cryptographic algorithm required for the TLS 1.3 support in the client software. (BZ#1595626) Notable changes in Libreswan in RHEL 8 The libreswan packages have been upgraded to upstream version 3.27, which provides many bug fixes and enhancements over the versions. Most notable changes include: Support for RSA-PSS (RFC 7427) through authby=rsa-sha2 , ECDSA (RFC 7427) through authby=ecdsa-sha2 , CURVE25519 using the dh31 keyword, and CHACHA20-POLY1305 for IKE and ESP through the chacha20_poly1305 encryption keyword has been added for the IKEv2 protocol. Support for the alternative KLIPS kernel module has been removed from Libreswan , as upstream has deprecated KLIPS entirely. The Diffie-Hellman groups DH22, DH23, and DH24 are no longer supported (as per RFC 8247). Note that the authby=rsasig has been changed to always use the RSA v1.5 method, and the authby=rsa-sha2 option uses the RSASSA-PSS method. The authby=rsa-sha1 option is not valid as per RFC 8247. That is the reason Libreswan no longer supports SHA-1 with digital signatures. (BZ#1566574) System-wide cryptographic policies change the default IKE version in Libreswan to IKEv2 The default IKE version in the Libreswan IPsec implementation has been changed from IKEv1 (RFC 2409) to IKEv2 (RFC 7296). The default IKE and ESP/AH algorithms for use with IPsec have been updated to comply with system-wide crypto policies, RFC 8221, and RFC 8247. 
Encryption key sizes of 256 bits are now preferred over key sizes of 128 bits. The default IKE and ESP/AH ciphers now include AES-GCM, CHACHA20POLY1305, and AES-CBC for encryption. For integrity checking, they provide AEAD and SHA-2. The Diffie-Hellman groups now contain DH19, DH20, DH21, DH14, DH15, DH16, and DH18. The following algorithms have been removed from the default IKE and ESP/AH policies: AES_CTR, 3DES, SHA1, DH2, DH5, DH22, DH23, and DH24. With the exceptions of DH22, DH23, and DH24, these algorithms can be enabled by the ike= or phase2alg=/esp=/ah= option in IPsec configuration files. To configure IPsec VPN connections that still require the IKEv1 protocol, add the ikev2=no option to connection configuration files. See the ipsec.conf(5) man page for more information. (BZ#1645606) IKE version-related changes in Libreswan With this enhancement, Libreswan handles internet key exchange (IKE) settings differently: The default internet key exchange (IKE) version has been changed from 1 to 2. Connections can now either use the IKEv1 or IKEv2 protocol, but not both. The interpretation of the ikev2 option has been changed: The values insist is interpreted as IKEv2-only. The values no and never are interpreted as IKEv1-only. The values propose , yes and, permit are no longer valid and result in an error, because it was not clear which IKE versions resulted from these values (BZ#1648776) New features in OpenSCAP in RHEL 8 The OpenSCAP suite has been upgraded to upstream version 1.3.0, which introduces many enhancements over the versions. The most notable features include: API and ABI have been consolidated - updated, deprecated and/or unused symbols have been removed. The probes are not run as independent processes, but as threads within the oscap process. The command-line interface has been updated. Python 2 bindings have been replaced with Python 3 bindings. (BZ#1614273) SCAP Security Guide now supports system-wide cryptographic policies The scap-security-guide packages have been updated to use predefined system-wide cryptographic policies for configuring the core cryptographic subsystems. The security content that conflicted with or overrode the system-wide cryptographic policies has been removed. Note that this change applies only on the security content in scap-security-guide , and you do not need to update the OpenSCAP scanner or other SCAP components. (BZ#1618505) OpenSCAP command-line interface has been improved The verbose mode is now available in all oscap modules and submodules. The tool output has improved formatting. Deprecated options have been removed to improve the usability of the command-line interface. The following options are no longer available: --show in oscap xccdf generate report has been completely removed. --probe-root in oscap oval eval has been removed. It can be replaced by setting the environment variable, OSCAP_PROBE_ROOT . --sce-results in oscap xccdf eval has been replaced by --check-engine-results validate-xml submodule has been dropped from CPE, OVAL, and XCCDF modules. validate submodules can be used instead to validate SCAP content against XML schemas and XSD schematrons. oscap oval list-probes command has been removed, the list of available probes can be displayed using oscap --version instead. OpenSCAP allows to evaluate all rules in a given XCCDF benchmark regardless of the profile by using --profile '(all)' . 
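A minimal sketch of such an evaluation follows; the data stream path assumes the scap-security-guide content location and is given only as an example:
oscap xccdf eval --profile '(all)' --results results.xml /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml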
(BZ#1618484) SCAP Security Guide PCI-DSS profile aligns with version 3.2.1 The scap-security-guide packages provide the PCI-DSS (Payment Card Industry Data Security Standard) profile for Red Hat Enterprise Linux 8 and this profile has been updated to align with the latest PCI-DSS version - 3.2.1. (BZ#1618528) SCAP Security Guide supports OSPP 4.2 The scap-security-guide packages provide a draft of the OSPP (Protection Profile for General Purpose Operating Systems) profile version 4.2 for Red Hat Enterprise Linux 8. This profile reflects mandatory configuration controls identified in the NIAP Configuration Annex to the Protection Profile for General Purpose Operating Systems (Protection Profile Version 4.2). SCAP Security Guide provides automated checks and scripts that help users to meet requirements defined in the OSPP. (BZ#1618518) Notable changes in rsyslog in RHEL 8 The rsyslog packages have been upgraded to upstream version 8.37.0, which provides many bug fixes and enhancements over the versions. Most notable changes include: Enhanced processing of rsyslog internal messages; possibility of rate-limiting them; fixed possible deadlock. Enhanced rate-limiting in general; the actual spam source is now logged. Improved handling of oversized messages - the user can now set how to treat them both in the core and in certain modules with separate actions. mmnormalize rule bases can now be embedded in the config file instead of creating separate files for them. All config variables, including variables in JSON, are now case-insensitive. Various improvements of PostgreSQL output. Added a possibility to use shell variables to control config processing, such as conditional loading of additional configuration files, executing statements, or including a text in config . Note that an excessive use of this feature can make it very hard to debug problems with rsyslog . 4-digit file creation modes can be now specified in config . Reliable Event Logging Protocol (RELP) input can now bind also only on a specified address. The default value of the enable.body option of mail output is now aligned to documentation The user can now specify insertion error codes that should be ignored in MongoDB output. Parallel TCP (pTCP) input has now the configurable backlog for better load-balancing. To avoid duplicate records that might appear when journald rotated its files, the imjournal option has been added. Note that use of this option can affect performance. Note that the system with rsyslog can be configured to provide better performance as described in the Configuring system logging without journald or with minimized journald usage Knowledgebase article. (BZ#1613880) New rsyslog module: omkafka To enable kafka centralized data storage scenarios, you can now forward logs to the kafka infrastructure using the new omkafka module. (BZ#1542497) rsyslog imfile now supports symlinks With this update, the rsyslog imfile module delivers better performance and more configuration options. This allows you to use the module for more complicated file monitoring use cases. For example, you can now use file monitors with glob patterns anywhere along the configured path and rotate symlink targets with increased data throughput. (BZ#1614179) The default rsyslog configuration file format is now non-legacy The configuration files in the rsyslog packages now use the non-legacy format by default. The legacy format can be still used, however, mixing current and legacy configuration statements has several constraints. 
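For orientation, the same file action written in the legacy syntax and in the non-legacy (RainerScript) syntax looks as follows; the facility and log path are placeholders:
# legacy (sysklogd-style) statement
mail.*   /var/log/maillog
# equivalent non-legacy (RainerScript) statement
mail.*   action(type="omfile" file="/var/log/maillog")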
Configurations carried from RHEL releases should be revised. See the rsyslog.conf(5) man page for more information. (BZ#1619645) Audit 3.0 replaces audispd with auditd With this update, functionality of audispd has been moved to auditd . As a result, audispd configuration options are now part of auditd.conf . In addition, the plugins.d directory has been moved under /etc/audit . The current status of auditd and its plug-ins can now be checked by running the service auditd state command. (BZ#1616428) tangd_port_t allows changes of the default port for Tang This update introduces the tangd_port_t SELinux type that allows the tangd service run as confined with SELinux enforcing mode. That change helps to simplify configuring a Tang server to listen on a user-defined port and it also preserves the security level provided by SELinux in enforcing mode. See the Configuring automated unlocking of encrypted volumes using policy-based decryption section for more information. ( BZ#1664345 ) New SELinux booleans This update of the SELinux system policy introduces the following booleans: colord_use_nfs mysql_connect_http pdns_can_network_connect_db ssh_use_tcpd sslh_can_bind_any_port sslh_can_connect_any_port virt_use_pcscd To get a list of booleans including their meaning, and to find out if they are enabled or disabled, install the selinux-policy-devel package and use: (JIRA:RHELPLAN-10347) SELinux now supports systemd No New Privileges This update introduces the nnp_nosuid_transition policy capability that enables SELinux domain transitions under No New Privileges (NNP) or nosuid if nnp_nosuid_transition is allowed between the old and new contexts. The selinux-policy packages now contain a policy for systemd services that use the NNP security feature. The following rule describes allowing this capability for a service: For example: The distribution policy now also contains an m4 macro interface, which can be used in SELinux security policies for services that use the init_nnp_daemon_domain() function. (BZ#1594111) Support for a new map permission check on the mmap syscall The SELinux map permission has been added to control memory mapped access to files, directories, sockets, and so on. This allows the SELinux policy to prevent direct memory access to various file system objects and ensure that every such access is revalidated. (BZ#1592244) SELinux now supports getrlimit permission in the process class This update introduces a new SELinux access control check, process:getrlimit , which has been added for the prlimit() function. This enables SELinux policy developers to control when one process attempts to read and then modify the resource limits of another process using the process:setrlimit permission. Note that SELinux does not restrict a process from manipulating its own resource limits through prlimit() . See the prlimit(2) and getrlimit(2) man pages for more information. (BZ#1549772) selinux-policy now supports VxFS labels This update introduces support for Veritas File System (VxFS) security extended attributes (xattrs). This enables to store proper SELinux labels with objects on the file system instead of the generic vxfs_t type. As a result, systems with VxFS with full support for SELinux are more secure. (BZ#1483904) Compile-time security hardening flags are applied more consistently Compile-time security hardening flags are applied more consistently on RPM packages in the RHEL 8 distribution, and the redhat-rpm-config package now automatically provides security hardening flags. 
The applied compile-time flags also help to meet Common Criteria (CC) requirements. The following security hardening flags are applied: For detection of buffer-overflow errors: D_FORTIFY_SOURCE=2 Standard library hardening that checks for C++ arrays, vectors, and strings: D_GLIBCXX_ASSERTIONS For Stack Smashing Protector (SSP): fstack-protector-strong For exception hardening: fexceptions For Control-Flow Integrity (CFI): fcf-protection=full (only on AMD and Intel 64-bit architectures) For Address Space Layout Randomization (ASLR): fPIE (for executables) or fPIC (for libraries) For protection against the Stack Clash vulnerability: fstack-clash-protection (except ARM) Link flags to resolve all symbols on startup: -Wl , -z,now See the gcc(1) man page for more information. (JIRA:RHELPLAN-2306) 5.1.16. Virtualization qemu-kvm 2.12 in RHEL 8 Red Hat Enterprise Linux 8 is distributed with qemu-kvm 2.12. This version fixes multiple bugs and adds a number of enhancements over the version 1.5.3, available in Red Hat Enterprise Linux 7. Notably, the following features have been introduced: Q35 guest machine type UEFI guest boot NUMA tuning and pinning in the guest vCPU hot plug and hot unplug guest I/O threading Note that some of the features available in qemu-kvm 2.12 are not supported on Red Hat Enterprise Linux 8. For detailed information, see "Feature support and limitations in RHEL 8 virtualization" on the Red Hat Customer Portal. (BZ#1559240) The Q35 machine type is now supported by virtualization Red hat Enterprise Linux 8 introduces the support for Q35 , a more modern PCI Express-based machine type. This provides a variety of improvements in features and performance of virtual devices, and ensures that a wider range of modern devices are compatible with virtualization. In addition, virtual machines created in Red Hat Enterprise Linux 8 are set to use Q35 by default. Also note that the previously default PC machine type has become deprecated and should only be used when virtualizing older operating systems that do not support Q35. (BZ#1599777) Post-copy virtual machine migration RHEL 8 makes it possible to perform a post-copy migration of KVM virtual machines (VMs). When used, post-copy migration pauses the migrating VM's vCPUs on the source host, transfers only a minimum of memory pages, activates the VM's vCPUs on the destination host, and transfers the remaining memory pages while the VM is running on the destination. This significantly reduces the downtime of the migrated VM, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source VM change. As such, it is optimal for migrating VMs in heavy continuous use, which would not be possible to migrate with the standard pre-copy migration. (JIRA:RHELPLAN-14323) virtio-gpu is now supported by KVM virtualization The virtio-gpu display device has been introduced for KVM virtual machines (VMs). virtio-gpu improves VM graphical performance and also enables various enhancements for virtual GPU devices to be implemented in the future. (JIRA:RHELPLAN-14329) KVM supports UMIP in RHEL 8 KVM virtualization now supports the User-Mode Instruction Prevention (UMIP) feature, which can help prevent user-space applications from accessing to system-wide settings. This reduces the potential vectors for privilege escalation attacks, and thus makes the KVM hypervisor and its guest machines more secure. 
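One quick, illustrative way to confirm from inside a guest that the feature is exposed is to look for the umip CPU flag; this is only a sanity check, not a configuration step:
lscpu | grep -o umip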
(BZ#1494651) Additional information in KVM guest crash reports The crash information that KVM hypervisor generates if a guest terminates unexpectedly or becomes unresponsive has been expanded. This makes it easier to diagnose and fix problems in KVM virtualization deployments. (BZ#1508139) NVIDIA vGPU is now compatible with the VNC console When using the NVIDIA virtual GPU (vGPU) feature, it is now possible to use the VNC console to display the visual output of the guest. (BZ#1497911) Ceph is supported by virtualization With this update, Ceph storage is supported by KVM virtualization on all CPU architectures supported by Red Hat. (BZ#1578855) Interactive boot loader for KVM virtual machines on IBM Z When booting a KVM virtual machine on an IBM Z host, the QEMU boot loader firmware can now present an interactive console interface of the guest OS. This makes it possible to troubleshoot guest OS boot problems without access to the host environment. (BZ#1508137) IBM z14 ZR1 supported in virtual machines The KVM hypervisor now supports the CPU model of the IBM z14 ZR1 server. This enables using the features of this CPU in KVM virtual machines that run on an IBM Z system. (BZ#1592337) KVM supports Telnet 3270 on IBM Z When using RHEL 8 as a host on an IBM Z system, it is now possible to connect to virtual machines on the host using Telnet 3270 clients. (BZ#1570029) QEMU sandboxing has been added In Red Hat Enterprise Linux 8, the QEMU emulator introduces the sandboxing feature. QEMU sandboxing provides configurable limitations to what systems calls QEMU can perform, and thus makes virtual machines more secure. Note that this feature is enabled and configured by default. (JIRA:RHELPLAN-10628) PV TLB Flush Hyper-V enlightenment RHEL 8 adds the PV TLB Flush Hyper-V Enlightenment feature. This improves the performance of Windows virtual machines (VMs) that run in overcommitted environments on the KVM hypervisor. (JIRA:RHELPLAN-14330) New machine types for KVM virtual machines on IBM POWER Multiple new rhel-pseries machine types have been enabled for KVM hypervisors running on IBM POWER 8 and IBM POWER 9 systems. This makes it possible for virtual machines (VMs) hosted on RHEL 8 on an IBM POWER system to correctly use the CPU features of these machine types. In addition, this allows for migrating VMs on IBM POWER to a more recent version of the KVM hypervisor. (BZ#1585651, BZ#1595501) GFNI and CLDEMOT instruction sets enabled for Intel Xeon SnowRidge Virtual machines (VMs) running in a RHEL 8 host on an Intel Xeon SnowRidge system are now able to use the GFNI and CLDEMOT instruction sets. This may significantly increase the performance of such VMs in certain scenarios. (BZ#1494705) IPv6 enabled for OVMF The IPv6 protocol is now enabled on Open Virtual Machine Firmware (OVMF). This makes it possible for virtual machines that use OVMF to take advantage of a variety of network boot improvements that IPv6 provides. (BZ#1536627) A VFIO-based block driver for NVMe devices has been added The QEMU emulator introduces a driver based on virtual function I/O (VFIO) for Non-volatile Memory Express (NVMe) devices. The driver communicates directly with NVMe devices attached to virtual machines (VMs) and avoids using the kernel system layer and its NVMe drivers. As a result, this enhances the performance of NVMe devices in virtual machines. (BZ#1519004) Multichannel support for the Hyper-V Generic UIO driver RHEL 8 now supports the multichannel feature for the Hyper-V Generic userspace I/O (UIO) driver. 
This makes it possible for RHEL 8 VMs running on the Hyper-V hypervisor to use the Data Plane Development Kit (DPDK) Netvsc Poll Mode driver (PMD), which enhances the networking capabilities of these VMs. Note, however, that the Netvsc interface status currently displays as Down even when it is running and usable. (BZ#1650149) Improved huge page support When using RHEL 8 as a virtualization host, users can modify the size of pages that back memory of a virtual machine (VM) to any size that is supported by the CPU. This can significantly improve the performance of the VM. To configure the size of VM memory pages, edit the VM's XML configuration and add the <hugepages> element to the <memoryBacking> section. (JIRA:RHELPLAN-14607) VMs on POWER 9 hosts can use THP In RHEL 8 hosts running on the IBM POWER 9 architecture, virtual machines (VMs) benefit from the transparent huge pages (THP) feature. THP enables the host kernel to dynamically assign huge memory pages to processes and thus improves the performance of VMs with large amounts of memory. (JIRA:RHELPLAN-13440) 5.1.17. Supportability sosreport can report eBPF-based programs and maps The sosreport tool has been enhanced to report any loaded extended Berkeley Packet Filtering (eBPF) programs and maps in Red Hat Enterprise Linux 8. (BZ#1559836) 5.2. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.0 that have a significant impact on users. 5.2.1. Desktop PackageKit can now operate on rpm packages With this update, the support for operating on rpm packages has been added into PackageKit . (BZ#1559414) 5.2.2. Graphics infrastructures QEMU does not handle 8-byte ggtt entries correctly QEMU occasionally splits an 8-byte ggtt entry write to two consecutive 4-byte writes. Each of these partial writes can trigger a separate host ggtt write. Sometimes the two ggtt writes are combined incorrectly. Consequently, translation to a machine address fails, and an error log occurs. (BZ#1598776) 5.2.3. Identity Management The Enterprise Security Client uses the opensc library for token detection Red Hat Enterprise Linux 8.0 only supports the opensc library for smart cards. With this update, the Enterprise Security Client (ESC) use opensc for token detection instead of the removed coolkey library. As a result, applications correctly detect supported tokens. (BZ#1538645) Certificate System now supports rotating debug logs Previously, Certificate System used a custom logging framework, which did not support log rotation. As a consequence, debug logs such as /var/log/pki/ instance_name /ca/debug grew indefinitely. With this update, Certificate System uses the java.logging.util framework, which supports log rotation. As a result, you can configure log rotation in the /var/lib/pki/ instance_name /conf/logging.properties file. For further information on log rotation, see documentation for the java.util.logging package. (BZ#1565073) Certificate System no longer logs SetAllPropertiesRule operation warnings when the service starts Previously, Certificate System logged warnings on the SetAllPropertiesRule operation in the /var/log/messages log file when the service started. The problem has been fixed, and the mentioned warnings are no longer logged. (BZ#1424966) The Certificate System KRA client parses Key Request responses correctly Previously, Certificate System switched to a new JSON library. As a consequence, serialization for certain objects differed, and the Python key recovery authority (KRA) client failed to parse Key Request responses. 
The client has been modified to support responses using both the old and the new JSON library. As a result, the Python KRA client parses Key Request responses correctly. (BZ#1623444) 5.2.4. Compilers and development tools GCC no longer produces false positive warnings about out-of-bounds access Previously, when compiling with the -O3 optimization level option, the GNU Compiler Collection (GCC) occasionally returned a false positive warning about an out-of-bounds access, even if the compiled code did not contain it. The optimization has been fixed and GCC no longer displays the false positive warning. (BZ#1246444) ltrace displays large structures correctly Previously, the ltrace tool could not correctly print large structures returned from functions. Handling of large structures in ltrace has been improved and they are now printed correctly. (BZ#1584322) GCC built-in function __builtin_clz returns correct values on IBM Z Previously, the FLOGR instruction of the IBM Z architecture was incorrectly folded by the GCC compiler. As a consequence, the __builtin_clz function using this instruction could return wrong results when the code was compiled with the -funroll-loops GCC option. This bug has been fixed and the function now provides correct results. (BZ#1652016) GDB provides nonzero exit status when last command in batch mode fails Previously, GDB always exited with status 0 when running in batch mode, regardless of errors in the commands. As a consequence, it was not possible to determine whether the commands succeeded. This behavior has been changed and GDB now exits with status 1 when an error occurs in the last command. This preserves compatibility with the behavior where all commands are executed. As a result, it is now possible to determine if GDB batch mode execution is successful. (BZ#1491128) 5.2.5. File systems and storage Higher print levels no longer cause iscsiadm to terminate unexpectedly Previously, the iscsiadm utility terminated unexpectedly when the user specified a print level higher than 0 with the --print or -P option. This problem has been fixed, and all print levels now work as expected. (BZ#1582099) multipathd no longer disables the path when it fails to get the WWID of a path Previously, the multipathd service treated a failed attempt at getting a path's WWID as getting an empty WWID. If multipathd failed to get the WWID of a path, it sometimes disabled that path. With this update, multipathd continues to use the old WWID if it fails to get the WWID when checking to see if it has changed. As a result, multipathd no longer disables paths when it fails to get the WWID, when checking if the WWID has changed. ( BZ#1673167 ) 5.2.6. High availability and clusters New /etc/sysconfig/pcsd option to reject client-initiated SSL/TLS renegotiation When TLS renegotiation is enabled on the server, a client is allowed to send a renegotiation request, which initiates a new handshake. Computational requirements of a handshake are higher on a server than on a client. This makes the server vulnerable to DoS attacks. With this fix, the setting PCSD_SSL_OPTIONS in the /etc/sysconfig/pcsd configuration file accepts the OP_NO_RENEGOTIATION option to reject renegotiations. Note that the client can still open multiple connections to a server with a handshake performed in all of them. 
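A sketch of such a setting in the /etc/sysconfig/pcsd file, with the renegotiation option added alongside other commonly disabled protocol options, follows; treat the exact option list as an example:
PCSD_SSL_OPTIONS='OP_NO_RENEGOTIATION,OP_NO_SSLv2,OP_NO_SSLv3,OP_NO_TLSv1,OP_NO_TLSv1_1'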
(BZ#1566430) A removed cluster node is no longer displayed in the cluster status Previously, when a node was removed with the pcs cluster node remove command, the removed node remained visible in the output of a pcs status display. With this fix, the removed node is no longer displayed in the cluster status. (BZ#1595829) Fence agents can now be configured using either newer, preferred parameter names or deprecated parameter names A large number of fence agent parameters have been renamed while the old parameter names are still supported as deprecated. Previously, pcs was not able to set the new parameters unless used with the --force option. With this fix, pcs now supports the renamed fence agent parameters while maintaining support for the deprecated parameters. (BZ#1436217) The pcs command now correctly reads the XML status of a cluster for display The pcs command runs the crm_mon utility to get the status of a cluster in XML format. The crm_mon utility prints XML to standard output and warnings to standard error output. Previously pcs mixed XML and warnings into one stream and was then unable to parse it as XML. With this fix, standard and error outputs are separated in pcs and reading the XML status of a cluster works as expected. (BZ#1578955) Users no longer advised to destroy clusters when creating new clusters with nodes from existing clusters Previously, when a user specified nodes from an existing cluster when running the pcs cluster setup command or when creating a cluster with the pcsd Web UI, pcs reported that as an error and suggested that the user destroy the cluster on the nodes. As a result, users would destroy the cluster on the nodes, breaking the cluster the nodes were part of as the remaining nodes would still consider the destroyed nodes to be part of the cluster. With this fix, users are instead advised to remove nodes from their cluster, better informing them of how to address the issue without breaking their clusters. (BZ#1596050) pcs commands no longer interactively ask for credentials When a non-root user runs a pcs command that requires root permission, pcs connects to the locally running pcsd daemon and passes the command to it, since the pcsd daemon runs with root permissions and is capable of running the command. Previously, if the user was not authenticated to the local pcsd daemon, pcs asked for a user name and a password interactively. This was confusing to the user and required special handling in scripts running pcs . With this fix, if the user is not authenticated then pcs exits with an error advising what to do: Either run pcs as root or authenticate using the new pcs client local-auth command. As a result, pcs commands do not interactively ask for credentials, improving the user experience. (BZ#1554310) The pcsd daemon now starts with its default self-generated SSL certificate when crypto-policies is set to FUTURE . A crypto-policies setting of FUTURE requires RSA keys in SSL certificates to be at least 3072b long. Previously, the pcsd daemon would not start when this policy was set since it generates SSL certificates with a 2048b key. With this update, the key size of pcsd self-generated SSL certificates has been increased to 3072b and pcsd now starts with its default self-generated SSL certificate. 
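To verify the key length of the generated certificate, a check along these lines can be used; the key location is an assumption based on the default pcsd layout:
openssl rsa -in /var/lib/pcsd/pcsd.key -noout -text | grep Private-Key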
(BZ#1638852) The pcsd service now starts when the network is ready Previously, When a user configured pcsd to bind to a specific IP address and the address was not ready during boot when pcsd attempted to start up, then pcsd failed to start and a manual intervention was required to start pcsd . With this fix, pcsd.service depends on network-online.target . As a result, pcsd starts when the network is ready and is able to bind to an IP address. (BZ#1640477) 5.2.7. Networking Weak TLS algorithms are no longer allowed for glib-networking Previously, the glib-networking package was not compatible with RHEL 8 System-wide Crypto Policy. As a consequence, applications using the glib library for networking might allow Transport Layer Security (TLS) connections using weak algorithms than the administrator intended. With this update, the system-wide crypto policy is applied, and now applications using glib for networking allow only TLS connections that are acceptable according to the policy. (BZ#1640534) 5.2.8. Security SELinux policy now allows iscsiuio processes to connect to the discovery portal Previously, SELinux policy was too restrictive for iscsiuio processes and these processes were not able to access /dev/uio* devices using the mmap system call. As a consequence, connection to the discovery portal failed. This update adds the missing rules to the SELinux policy and iscsiuio processes work as expected in the described scenario. (BZ#1626446) 5.2.9. Subscription management dnf and yum can now access the repos regardless of subscription-manager values Previously, the dnf or yum commands ignored the https:// prefix from a URL added by the subscription-manager service. The updated dnf or yum commands do not ignore invalid https:// URLs. As a consequence, dnf and yum failed to access the repos. To fix the problem, a new configuration variable, proxy_scheme has been added to the /etc/rhsm/rhsm.conf file and the value can be set to either http or https . If no value is specified, subscription-manager set http by default which is more commonly used. Note that if the proxy uses http , most users should not change anything in the configuration in /etc/rhsm/rhsm.conf . If the proxy uses https , users should update the value of proxy_scheme to https . Then, in both cases, users need to run the subscription-manager repos --list command or wait for the rhsmcertd daemon process to regenerate the /etc/yum.repos.d/redhat.repo properly. ( BZ#1654531 ) 5.2.10. Virtualization Mounting ephemeral disks on Azure now works more reliably Previously, mounting an ephemeral disk on a virtual machine (VM) running on the Microsoft Azure platform failed if the VM was "stopped(deallocated)" and then started. This update ensures that reconnecting disks is handled correctly in the described circumstances, which prevents the problem from occurring. (BZ#1615599) 5.3. Technology previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.0. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 5.3.1. Kernel eBPF available as a Technology Preview The extended Berkeley Packet Filtering (eBPF) feature is available as a Technology Preview for both networking and tracing. eBPF enables the user space to attach custom programs onto a variety of points (sockets, trace points, packet reception) to receive and process data. 
The feature includes a new system call bpf(), which supports creating various types of maps and inserting various types of programs into the kernel. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) man page for more information. (BZ#1559616) BCC is available as a Technology Preview BPF Compiler Collection (BCC) is a user-space toolkit for creating efficient kernel tracing and manipulation programs that is available as a Technology Preview in Red Hat Enterprise Linux 8. BCC provides tools for I/O analysis, networking, and monitoring of Linux operating systems using the extended Berkeley Packet Filtering (eBPF). (BZ#1548302) Control Group v2 available as a Technology Preview in RHEL 8 The Control Group v2 mechanism is a unified hierarchy control group. Control Group v2 organizes processes hierarchically and distributes system resources along the hierarchy in a controlled and configurable manner. Unlike the previous version, Control Group v2 has only a single hierarchy. This single hierarchy enables the Linux kernel to: Categorize processes based on the role of their owner. Eliminate issues with conflicting policies of multiple hierarchies. Control Group v2 supports numerous controllers: The CPU controller regulates the distribution of CPU cycles. This controller implements: Weight and absolute bandwidth limit models for the normal scheduling policy. An absolute bandwidth allocation model for the real-time scheduling policy. The Memory controller regulates the memory distribution. Currently, the following types of memory usage are tracked: Userland memory - page cache and anonymous memory. Kernel data structures such as dentries and inodes. TCP socket buffers. The I/O controller regulates the distribution of I/O resources. The Writeback controller interacts with both the Memory and I/O controllers and is specific to Control Group v2. The information above is based on https://www.kernel.org/doc/Documentation/cgroup-v2.txt . You can refer to the same link to obtain more information about particular Control Group v2 controllers. (BZ#1401552) early kdump available as a Technology Preview in Red Hat Enterprise Linux 8 The early kdump feature allows the crash kernel and initramfs to load early enough to capture the vmcore information even for early crashes. For more details about early kdump , see the /usr/share/doc/kexec-tools/early-kdump-howto.txt file. (BZ#1520209) The ibmvnic device driver available as a Technology Preview With Red Hat Enterprise Linux 8.0, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic , is available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with SR-IOV NIC, provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. (BZ#1524683) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE and supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8.
(BZ#1605216) 5.3.2. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) 5.3.3. Hardware enablement The cluster-aware MD RAID1 is available as a Technology Preview. RAID1 cluster is not enabled by default in the kernel. If you want to try RAID1 cluster, you need to build the kernel with RAID1 cluster as a module first, using the following steps: Enter the make menuconfig command. Enter the make && make modules && make modules_install && make install command. Enter the reboot command. ( BZ#1654482 ) 5.3.4. Identity Management DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use previous or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless of whether one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . ( BZ#1664719 ) 5.3.5. File systems and storage Aero adapters available as a Technology Preview The following Aero adapters are available as a Technology Preview: PCI ID 0x1000:0x00e2 and 0x1000:0x00e6, controlled by the mpt3sas driver PCI ID 0x1000:0x10e5 and 0x1000:0x10e6, controlled by the megaraid_sas driver (BZ#1663281) Stratis is now available Stratis is a new local storage manager. It provides managed file systems on top of pools of storage, with additional features for the user. Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service.
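As a quick illustration, a minimal sketch of creating a pool and a file system with the stratis utility (the device and names are hypothetical examples, not taken from the original note):
# Create a pool from one block device, create a file system in it, and mount it
stratis pool create examplepool /dev/vdb
stratis filesystem create examplepool examplefs
mkdir -p /mnt/examplefs
mount /dev/stratis/examplepool/examplefs /mnt/examplefs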
Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . (JIRA:RHELPLAN-1212) OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207) File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8.0, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. 
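For illustration, a minimal sketch of preparing such a file system (the /dev/pmem0 namespace device and mount point are example names):
# Create an ext4 file system on an NVDIMM namespace and mount it with the dax option
mkfs.ext4 /dev/pmem0
mkdir -p /mnt/dax
mount -o dax /dev/pmem0 /mnt/dax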
Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455) 5.3.6. High availability and clusters Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on the podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) 5.3.7. Networking XDP available as a Technology Preview The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation. (BZ#1503672) eBPF for tc available as a Technology Preview As a Technology Preview, the Traffic Control (tc) kernel subsystem and the tc tool can attach extended Berkeley Packet Filtering (eBPF) programs as packet classifiers and actions for both ingress and egress queueing disciplines. This enables programmable packet processing inside the kernel network data path. ( BZ#1699825 ) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) KTLS available as a Technology Preview In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality. (BZ#1570255) TIPC available as a Technology Preview The Transparent Inter Process Communication ( TIPC ) is a protocol specially designed for efficient communication within clusters of loosely paired nodes. It works as a kernel module and provides a tipc tool in iproute2 package to allow designers to create applications that can communicate quickly and reliably with other applications regardless of their location within the cluster. This feature is available as a Technology Preview. (BZ#1581898) The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, an Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. (BZ#1906489) 5.3.8. Red Hat Enterprise Linux system roles The postfix role of RHEL system roles available as a Technology Preview Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview. 
The following roles are fully supported: kdump network selinux timesync For more information, see the Knowledgebase article about RHEL system roles . (BZ#1812552) 5.3.9. Virtualization AMD SEV for KVM virtual machines As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware. Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 15 running VMs using SEV. Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. To do so, add the following to the VM's XML configuration: The recommended value for N is equal to or greater then the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater. (BZ#1501618, BZ#1501607) Intel vGPU As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working. (BZ#1528684) Nested virtualization now available on IBM POWER 9 As a Technology Preview, it is now possible to use the nested virtualization features on RHEL 8 host machines running on IBM POWER 9 systems. Nested virtualization enables KVM virtual machines (VMs) to act as hypervisors, which allows for running VMs inside VMs. Note that nested virtualization also remains a Technology Preview on AMD64 and Intel 64 systems. Also note that for nested virtualization to work on IBM POWER 9, the host, the guest, and the nested guests currently all need to run one of the following operating systems: RHEL 8 RHEL 7 for POWER 9 (BZ#1505999, BZ#1518937) KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039) 5.3.10. Containers The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861) 5.4. Removed functionality This chapter lists functionalities that were supported in RHEL 7 but are no longer available in RHEL 8.0. 5.4.1. Removed hardware support This section lists device drivers and adapters that were supported in RHEL 7 but are no longer available in RHEL 8.0. 5.4.1.1. 
Removed device drivers 3w-9xxx 3w-sas aic79xx aoe arcmsr ata drivers: acard-ahci sata_mv sata_nv sata_promise sata_qstor sata_sil sata_sil24 sata_sis sata_svw sata_sx4 sata_uli sata_via sata_vsc bfa cxgb3 cxgb3i e1000 floppy hptiop initio isci iw_cxgb3 mptbase mptctl mptsas mptscsih mptspi mtip32xx mvsas mvumi OSD drivers: osd libosd osst pata drivers: pata_acpi pata_ali pata_amd pata_arasan_cf pata_artop pata_atiixp pata_atp867x pata_cmd64x pata_cs5536 pata_hpt366 pata_hpt37x pata_hpt3x2n pata_hpt3x3 pata_it8213 pata_it821x pata_jmicron pata_marvell pata_netcell pata_ninja32 pata_oldpiix pata_pdc2027x pata_pdc202xx_old pata_piccolo pata_rdc pata_sch pata_serverworks pata_sil680 pata_sis pata_via pdc_adma pm80xx(pm8001) pmcraid qla3xxx stex sx8 tulip ufshcd wireless drivers: carl9170 iwl4965 iwl3945 mwl8k rt73usb rt61pci rtl8187 wil6210 5.4.1.2. Removed adapters The following adapters from the aacraid driver have been removed: PERC 2/Si (Iguana/PERC2Si), PCI ID 0x1028:0x0001 PERC 3/Di (Opal/PERC3Di), PCI ID 0x1028:0x0002 PERC 3/Si (SlimFast/PERC3Si), PCI ID 0x1028:0x0003 PERC 3/Di (Iguana FlipChip/PERC3DiF), PCI ID 0x1028:0x0004 PERC 3/Di (Viper/PERC3DiV), PCI ID 0x1028:0x0002 PERC 3/Di (Lexus/PERC3DiL), PCI ID 0x1028:0x0002 PERC 3/Di (Jaguar/PERC3DiJ), PCI ID 0x1028:0x000a PERC 3/Di (Dagger/PERC3DiD), PCI ID 0x1028:0x000a PERC 3/Di (Boxster/PERC3DiB), PCI ID 0x1028:0x000a catapult, PCI ID 0x9005:0x0283 tomcat, PCI ID 0x9005:0x0284 Adaptec 2120S (Crusader), PCI ID 0x9005:0x0285 Adaptec 2200S (Vulcan), PCI ID 0x9005:0x0285 Adaptec 2200S (Vulcan-2m), PCI ID 0x9005:0x0285 Legend S220 (Legend Crusader), PCI ID 0x9005:0x0285 Legend S230 (Legend Vulcan), PCI ID 0x9005:0x0285 Adaptec 3230S (Harrier), PCI ID 0x9005:0x0285 Adaptec 3240S (Tornado), PCI ID 0x9005:0x0285 ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285 ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285 ASR-2230S + ASR-2230SLP PCI-X (Lancer), PCI ID 0x9005:0x0286 ASR-2130S (Lancer), PCI ID 0x9005:0x0286 AAR-2820SA (Intruder), PCI ID 0x9005:0x0286 AAR-2620SA (Intruder), PCI ID 0x9005:0x0286 AAR-2420SA (Intruder), PCI ID 0x9005:0x0286 ICP9024RO (Lancer), PCI ID 0x9005:0x0286 ICP9014RO (Lancer), PCI ID 0x9005:0x0286 ICP9047MA (Lancer), PCI ID 0x9005:0x0286 ICP9087MA (Lancer), PCI ID 0x9005:0x0286 ICP5445AU (Hurricane44), PCI ID 0x9005:0x0286 ICP9085LI (Marauder-X), PCI ID 0x9005:0x0285 ICP5085BR (Marauder-E), PCI ID 0x9005:0x0285 ICP9067MA (Intruder-6), PCI ID 0x9005:0x0286 Themisto Jupiter Platform, PCI ID 0x9005:0x0287 Themisto Jupiter Platform, PCI ID 0x9005:0x0200 Callisto Jupiter Platform, PCI ID 0x9005:0x0286 ASR-2020SA SATA PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285 ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285 AAR-2410SA PCI SATA 4ch (Jaguar II), PCI ID 0x9005:0x0285 CERC SATA RAID 2 PCI SATA 6ch (DellCorsair), PCI ID 0x9005:0x0285 AAR-2810SA PCI SATA 8ch (Corsair-8), PCI ID 0x9005:0x0285 AAR-21610SA PCI SATA 16ch (Corsair-16), PCI ID 0x9005:0x0285 ESD SO-DIMM PCI-X SATA ZCR (Prowler), PCI ID 0x9005:0x0285 AAR-2610SA PCI SATA 6ch, PCI ID 0x9005:0x0285 ASR-2240S (SabreExpress), PCI ID 0x9005:0x0285 ASR-4005, PCI ID 0x9005:0x0285 IBM 8i (AvonPark), PCI ID 0x9005:0x0285 IBM 8i (AvonPark Lite), PCI ID 0x9005:0x0285 IBM 8k/8k-l8 (Aurora), PCI ID 0x9005:0x0286 IBM 8k/8k-l4 (Aurora Lite), PCI ID 0x9005:0x0286 ASR-4000 (BlackBird), PCI ID 0x9005:0x0285 ASR-4800SAS (Marauder-X), PCI ID 0x9005:0x0285 ASR-4805SAS (Marauder-E), PCI ID 0x9005:0x0285 ASR-3800 (Hurricane44), PCI ID 0x9005:0x0286 
Perc 320/DC, PCI ID 0x9005:0x0285 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046 Dell PERC2/QC, PCI ID 0x1011:0x0046 HP NetRAID-4M, PCI ID 0x1011:0x0046 Dell Catchall, PCI ID 0x9005:0x0285 Legend Catchall, PCI ID 0x9005:0x0285 Adaptec Catch All, PCI ID 0x9005:0x0285 Adaptec Rocket Catch All, PCI ID 0x9005:0x0286 Adaptec NEMER/ARK Catch All, PCI ID 0x9005:0x0288 The following adapters from the mpt2sas driver have been removed: SAS2004, PCI ID 0x1000:0x0070 SAS2008, PCI ID 0x1000:0x0072 SAS2108_1, PCI ID 0x1000:0x0074 SAS2108_2, PCI ID 0x1000:0x0076 SAS2108_3, PCI ID 0x1000:0x0077 SAS2116_1, PCI ID 0x1000:0x0064 SAS2116_2, PCI ID 0x1000:0x0065 SSS6200, PCI ID 0x1000:0x007E The following adapters from the megaraid_sas driver have been removed: Dell PERC5, PCI ID 0x1028:0x15 SAS1078R, PCI ID 0x1000:0x60 SAS1078DE, PCI ID 0x1000:0x7C SAS1064R, PCI ID 0x1000:0x411 VERDE_ZCR, PCI ID 0x1000:0x413 SAS1078GEN2, PCI ID 0x1000:0x78 SAS0079GEN2, PCI ID 0x1000:0x79 SAS0073SKINNY, PCI ID 0x1000:0x73 SAS0071SKINNY, PCI ID 0x1000:0x71 The following adapters from the qla2xxx driver have been removed: ISP24xx, PCI ID 0x1077:0x2422 ISP24xx, PCI ID 0x1077:0x2432 ISP2422, PCI ID 0x1077:0x5422 QLE220, PCI ID 0x1077:0x5432 QLE81xx, PCI ID 0x1077:0x8001 QLE10000, PCI ID 0x1077:0xF000 QLE84xx, PCI ID 0x1077:0x8044 QLE8000, PCI ID 0x1077:0x8432 QLE82xx, PCI ID 0x1077:0x8021 The following adapters from the qla4xxx driver have been removed: QLOGIC_ISP8022, PCI ID 0x1077:0x8022 QLOGIC_ISP8324, PCI ID 0x1077:0x8032 QLOGIC_ISP8042, PCI ID 0x1077:0x8042 The following adapters from the be2iscsi driver have been removed: BladeEngine 2 (BE2) devices BladeEngine2 10Gb iSCSI Initiator (generic), PCI ID 0x19a2:0x212 OneConnect OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x19a2:0x702 OCe10100 BE2 adapter family, PCI ID 0x19a2:0x703 BladeEngine 3 (BE3) devices OneConnect TOMCAT iSCSI, PCI ID 0x19a2:0x0712 BladeEngine3 iSCSI, PCI ID 0x19a2:0x0222 The following Ethernet adapters controlled by the be2net driver have been removed: BladeEngine 2 (BE2) devices OneConnect TIGERSHARK NIC, PCI ID 0x19a2:0700 BladeEngine2 Network Adapter, PCI ID 0x19a2:0211 BladeEngine 3 (BE3) devices OneConnect TOMCAT NIC, PCI ID 0x19a2:0x0710 BladeEngine3 Network Adapter, PCI ID 0x19a2:0221 The following adapters from the lpfc driver have been removed: BladeEngine 2 (BE2) devices OneConnect TIGERSHARK FCoE, PCI ID 0x19a2:0x0704 BladeEngine 3 (BE3) devices OneConnect TOMCAT FCoE, PCI ID 0x19a2:0x0714 Fibre Channel (FC) devices FIREFLY, PCI ID 0x10df:0x1ae5 PROTEUS_VF, PCI ID 0x10df:0xe100 BALIUS, PCI ID 0x10df:0xe131 PROTEUS_PF, PCI ID 0x10df:0xe180 RFLY, PCI ID 0x10df:0xf095 PFLY, PCI ID 0x10df:0xf098 LP101, PCI ID 0x10df:0xf0a1 TFLY, PCI ID 0x10df:0xf0a5 BSMB, PCI ID 0x10df:0xf0d1 BMID, PCI ID 0x10df:0xf0d5 ZSMB, PCI ID 0x10df:0xf0e1 ZMID, PCI ID 0x10df:0xf0e5 NEPTUNE, PCI ID 0x10df:0xf0f5 NEPTUNE_SCSP, PCI ID 0x10df:0xf0f6 NEPTUNE_DCSP, PCI ID 0x10df:0xf0f7 FALCON, PCI ID 0x10df:0xf180 SUPERFLY, PCI ID 0x10df:0xf700 DRAGONFLY, PCI ID 0x10df:0xf800 CENTAUR, PCI ID 0x10df:0xf900 PEGASUS, PCI ID 0x10df:0xf980 THOR, PCI ID 0x10df:0xfa00 VIPER, PCI ID 0x10df:0xfb00 LP10000S, PCI ID 0x10df:0xfc00 LP11000S, PCI ID 0x10df:0xfc10 LPE11000S, PCI ID 0x10df:0xfc20 PROTEUS_S, PCI ID 0x10df:0xfc50 HELIOS, PCI ID 0x10df:0xfd00 HELIOS_SCSP, PCI ID 0x10df:0xfd11 HELIOS_DCSP, PCI ID 0x10df:0xfd12 ZEPHYR, PCI ID 0x10df:0xfe00 HORNET, PCI ID 0x10df:0xfe05 ZEPHYR_SCSP, PCI ID 0x10df:0xfe11 ZEPHYR_DCSP, PCI 
ID 0x10df:0xfe12 Lancer FCoE CNA devices OCe15104-FM, PCI ID 0x10df:0xe260 OCe15102-FM, PCI ID 0x10df:0xe260 OCm15108-F-P, PCI ID 0x10df:0xe260 To check the PCI IDs of the hardware on your system, run the lspci -nn command. Note that other adapters from the mentioned drivers that are not listed here remain unchanged. 5.4.1.3. FCoE software removal Fibre Channel over Ethernet (FCoE) software has been removed from Red Hat Enterprise Linux 8. Specifically, the fcoe.ko kernel module is no longer available for creating software FCoE interfaces over Ethernet adapters and drivers. This change is due to a lack of industry adoption for software-managed FCoE. Specific changes to Red Hat Enterprise Linux 8 include: The fcoe.ko kernel module is no longer available. This removes support for software FCoE with Data Center Bridging enabled Ethernet adapters and drivers. Link-level software configuration via Data Center Bridging eXchange (DCBX) using lldpad is no longer supported for FCoE. The fcoe-utils tools (specifically fcoemon ) are configured by default to not validate DCB configuration or communicate with lldpad . The lldpad integration in fcoemon might be permanently disabled. The libhbaapi and libhbalinux libraries are no longer used by fcoe-utils , and will not undergo any direct testing from Red Hat. Support for the following remains unchanged: Currently supported offloading FCoE adapters that appear as Fibre Channel adapters to the operating system and do not use the fcoe-utils management tools, unless stated in a separate note. This applies to select adapters supported by the lpfc and qla2xxx FCoE drivers. Note that the bfa driver is not included in Red Hat Enterprise Linux 8. Currently supported offloading FCoE adapters that do use the fcoe-utils management tools but have their own kernel drivers instead of fcoe.ko and manage DCBX configuration in their drivers and/or firmware, unless stated in a separate note. The fnic , bnx2fc , and qedf drivers will continue to be fully supported in Red Hat Enterprise Linux 8. The libfc.ko and libfcoe.ko kernel modules that are required for some of the supported drivers covered by the previous statement. 5.4.2. Other removed functionality 5.4.2.1. The web console Internet Explorer is no longer supported by the RHEL 8 web console Support for the Internet Explorer browser has been removed from the RHEL 8 web console, also known as Cockpit. Attempting to open the web console in Internet Explorer now displays an error screen with a list of recommended browsers that can be used instead. (BZ#1619993) 5.4.2.2. Installer and image creation Installer support for Btrfs has been removed in RHEL 8 The Btrfs file system is not supported in Red Hat Enterprise Linux 8. As a result, the Anaconda installer Graphical User Interface (GUI) and the Kickstart commands no longer support Btrfs . (BZ#1533904) Several Kickstart commands and options have been removed The following Kickstart commands and options have been completely removed in RHEL 8. Using them in Kickstart files will cause an error. upgrade (This command had already previously been deprecated.) btrfs part/partition btrfs part --fstype btrfs or partition --fstype btrfs logvol --fstype btrfs raid --fstype btrfs Where only specific options and values are listed, the base command and its other options are still available and not removed. ( BZ#1698613 ) The ntp package has been removed Red Hat Enterprise Linux 7 supported two implementations of the NTP protocol: ntp and chrony . In Red Hat Enterprise Linux 8, only chrony is available.
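As a quick check that the replacement is in place (a sketch; output will vary by system):
# Verify that chronyd is running and inspect the current synchronization state
systemctl is-active chronyd
chronyc tracking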
Migration from ntp to chrony is documented in Migrating to chrony . Possible replacements for ntp features that are not supported by chrony are documented in Achieving some settings previously supported by ntp in chrony . (JIRA:RHELPLAN-1842) KDE unsupported in RHEL 8 With Red Hat Enterprise Linux 8, all packages related to KDE Plasma Workspaces (KDE) have been removed, and it is no longer possible to use KDE as an alternative to the default GNOME desktop environment. Red Hat does not support migration from RHEL 7 with KDE to RHEL 8 GNOME. Users of RHEL 7 with KDE are recommended to back up their data and install RHEL 8 with GNOME. (BZ#1581496) gnome-terminal removed support for non-UTF8 locales in RHEL 8 The gnome-terminal application in RHEL 8 and later releases refuses to start when the system locale is set to non-UTF8 because only UTF8 locales are supported. For more information, see the The gnome-terminal application fails to start when the system locale is set to non-UTF8 Knowledgebase article. (JIRA:RHELDOCS-18772) 5.4.2.3. Hardware enablement The e1000 network driver is not supported in RHEL 8 In Red Hat Enterprise Linux 8, the e1000 network driver is not supported. This affects both bare metal and virtual environments. However, the newer e1000e network driver continues to be fully supported in RHEL 8. (BZ#1596240) RHEL 8 does not support the tulip driver With this update, the tulip network driver is no longer supported. As a consequence, when using RHEL 8 on a Generation 1 virtual machine (VM) on the Microsoft Hyper-V hypervisor, the "Legacy Network Adapter" device does not work, which causes PXE installation of such VMs to fail. For the PXE installation to work, install RHEL 8 on a Generation 2 Hyper-V VM. If you require a RHEL 8 Generation 1 VM, use ISO installation. (BZ#1534870) 5.4.2.4. Identity Management NSS databases not supported in OpenLDAP The OpenLDAP suite in versions of Red Hat Enterprise Linux (RHEL) used the Mozilla Network Security Services (NSS) for cryptographic purposes. With RHEL 8, OpenSSL, which is supported by the OpenLDAP community, replaces NSS. OpenSSL does not support NSS databases for storing certificates and keys. However, it still supports privacy enhanced mail (PEM) files that serve the same purpose. (BZ#1570056) sssd-secrets has been removed The sssd-secrets component of the System Security Services Daemon (SSSD) has been removed in Red Hat Enterprise Linux 8. This is because Custodia, a secrets service provider, is no longer actively developed. Use other Identity Management tools to store secrets, for example the Identity Management Vault. (JIRA:RHELPLAN-10441) Selected Python Kerberos packages have been replaced In Red Hat Enterprise Linux (RHEL) 8, the python-gssapi package, python-requests-gssapi module, and urllib-gssapi library have replaced Python Kerberos packages such as python-krbV , python-kerberos , python-requests-kerberos , and python-urllib2_kerberos . Notable benefits include: python-gssapi is easier to use than python-kerberos and python-krbV python-gssapi supports both python 2 and python 3 whereas python-krbV does not the GSSAPI-based packages allow the use of other Generic Security Services API (GSSAPI) mechanisms in addition to Kerberos, such as the NT LAN Manager NTLM for backward compatibility reasons This update improves the maintainability and debuggability of GSSAPI in RHEL 8. (JIRA:RHELPLAN-10444) 5.4.2.5. Compilers and development tools librtkaio removed With this update, the librtkaio library has been removed. 
This library provided high-performance, real-time asynchronous I/O access for some files, based on Linux kernel Asynchronous I/O support (KAIO). As a result of the removal: Applications using the LD_PRELOAD method to load librtkaio display a warning about a missing library, load the librt library instead, and run correctly. Applications using the LD_LIBRARY_PATH method to load librtkaio load the librt library instead and run correctly, without any warning. Applications using the dlopen() system call to access librtkaio directly load the librt library instead. Users of librtkaio have the following options: Use the fallback mechanism described above, without any changes to their applications. Change the code of their applications to use the librt library, which offers a compatible POSIX-compliant API. Change the code of their applications to use the libaio library, which offers a compatible API. Both librt and libaio can provide comparable features and performance under specific conditions. Note that the libaio package has a Red Hat compatibility level of 2, while librt and the removed librtkaio have level 1. For more details, see https://fedoraproject.org/wiki/Changes/GLIBC223_librtkaio_removal (BZ#1512006) Valgrind library for MPI debugging support removed The libmpiwrap.so wrapper library for Valgrind provided by the valgrind-openmpi package has been removed. This library enabled Valgrind to debug programs using the Message Passing Interface (MPI). This library was specific to the Open MPI implementation version in previous versions of Red Hat Enterprise Linux. Users of libmpiwrap.so are encouraged to build their own version from upstream sources specific to their MPI implementation and version. Supply these custom-built libraries to Valgrind using the LD_PRELOAD technique. (BZ#1500481) Development headers and static libraries removed from valgrind-devel Previously, the valgrind-devel sub-package used to include development files for developing custom valgrind tools. This update removes these files because they do not have a guaranteed API, have to be linked statically, and are unsupported. The valgrind-devel package still contains the development files for valgrind-aware programs and header files such as valgrind.h , callgrind.h , drd.h , helgrind.h , and memcheck.h , which are stable and well supported. (BZ#1538009) The nosegneg libraries for 32-bit Xen have been removed Previously, the glibc i686 packages contained an alternative glibc build, which avoided the use of the thread descriptor segment register with negative offsets ( nosegneg ). This alternative build was only used in the 32-bit version of the Xen Project hypervisor without hardware virtualization support, as an optimization to reduce the cost of full paravirtualization. These alternative builds are no longer used and they have been removed. (BZ#1514839) GCC no longer builds Ada, Go, and Objective C/C++ code Capability for building code in the Ada (GNAT), GCC Go, and Objective C/C++ languages has been removed from the GCC compiler. To build Go code, use the Go Toolset instead. (BZ#1650618) make new operator != causes a different interpretation of certain existing makefile syntax The != shell assignment operator has been added to GNU make as an alternative to the $(shell ... ) function to increase compatibility with BSD makefiles. As a consequence, variables with names ending in an exclamation mark and immediately followed by an assignment, such as variable!=value , are now interpreted as the shell assignment.
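For illustration, a minimal sketch showing both interpretations (the file name is hypothetical):
# The first assignment runs a shell command; the second defines a variable
# literally named "from_shell!" (the old interpretation).
cat > /tmp/neq-demo.mk <<'EOF'
from_shell != echo produced-by-a-shell-command
from_shell! = a-plain-value
$(info shell assignment gives "$(from_shell)")
$(info the variable named from_shell! holds "$(from_shell!)")
all: ;
EOF
make -f /tmp/neq-demo.mk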
To restore the behavior, add a space after the exclamation mark, such as variable! =value . For more details and differences between the operator and the function, see the GNU make manual. (BZ#1650675) Sun RPC and NIS interfaces removed from glibc The glibc library no longer provides Sun RPC and NIS interfaces for new applications. These interfaces are now available only for running legacy applications. Developers must change their applications to use the libtirpc library instead of Sun RPC and libnsl2 instead of NIS. Applications can benefit from IPv6 support in the replacement libraries. (BZ#1533608) 5.4.2.6. File systems and storage Btrfs has been removed The Btrfs file system has been removed in Red Hat Enterprise Linux 8. This includes the following components: The btrfs.ko kernel module The btrfs-progs package The snapper package You can no longer create or mount Btrfs file systems in Red Hat Enterprise Linux 8. (BZ#1582530) The /etc/sysconfig/nfs file and legacy NFS service names are no longer available In Red Hat Enterprise Linux 8.0, the NFS configuration has moved from the /etc/sysconfig/nfs configuration file, which was used in Red Hat Enterprise Linux 7, to /etc/nfs.conf . The /etc/nfs.conf file uses a different syntax. Red Hat Enterprise Linux 8 attempts to automatically convert all options from /etc/sysconfig/nfs to /etc/nfs.conf when upgrading from Red Hat Enterprise Linux 7. Both configuration files are supported in Red Hat Enterprise Linux 7. Red Hat recommends that you use the new /etc/nfs.conf file to make NFS configuration in all versions of Red Hat Enterprise Linux compatible with automated configuration systems. Additionally, the following NFS service aliases have been removed and replaced by their upstream names: nfs.service , replaced by nfs-server.service nfs-secure.service , replaced by rpc-gssd.service rpcgssd.service , replaced by rpc-gssd.service nfs-idmap.service , replaced by nfs-idmapd.service rpcidmapd.service , replaced by nfs-idmapd.service nfs-lock.service , replaced by rpc-statd.service nfslock.service , replaced by rpc-statd.service (BZ#1639432) VDO no longer supports read cache The read cache functionality has been removed from Virtual Data Optimizer (VDO). The read cache is always disabled on VDO volumes, and you can no longer enable it using the --readCache option of the vdo utility. Red Hat might reintroduce the VDO read cache in a later Red Hat Enterprise Linux release, using a different implementation. (BZ#1639512) Removal of clvmd for managing shared storage devices LVM no longer uses clvmd (cluster lvm daemon) for managing shared storage devices. Instead, LVM now uses lvmlockd (lvm lock daemon). For details about using lvmlockd , see the lvmlockd(8) man page. For details about using shared storage in general, see the lvmsystemid(7) man page. For information on using LVM in a Pacemaker cluster, see the help screen for the LVM-activate resource agent. For an example of a procedure to configure a shared logical volume in a Red Hat High Availability cluster, see Configuring a GFS2 file system in a cluster . (BZ#1643543) Removal of lvmetad daemon LVM no longer uses the lvmetad daemon for caching metadata, and will always read metadata from disk. LVM disk reading has been reduced, which reduces the benefits of caching. Previously, autoactivation of logical volumes was indirectly tied to the use_lvmetad setting in the lvm.conf configuration file. 
The correct way to disable autoactivation continues to be setting auto_activation_volume_list in the lvm.conf file. (BZ#1643545) LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1 metadata format. LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1 metadata format. If you created your logical volume before Red Hat Enterprise Linux 4 was introduced, then this may affect you. Volume groups using the lvm1 format should be converted to the lvm2 format using the vgconvert command. (BZ#1643547) LVM libraries and LVM Python bindings have been removed The lvm2app library and LVM Python bindings, which were provided by the lvm2-python-libs package, have been removed. Red Hat recommends the following solutions instead: The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python version 3. The LVM command-line utilities with JSON formatting; this formatting has been available since the lvm2 package version 2.02.158. The libblockdev library, included in AppStream, for C/C++ You must port any applications using the removed libraries and bindings to the D-Bus API before upgrading to Red Hat Enterprise Linux 8. (BZ#1643549) The ability to mirror the log for LVM mirrors has been removed The mirrored log feature of mirrored LVM volumes has been removed. Red Hat Enterprise Linux (RHEL) 8 no longer supports creating or activating LVM volumes with a mirrored mirror log. The recommended replacements are: RAID1 LVM volumes. The main advantage of RAID1 volumes is their ability to work even in degraded mode and to recover after a transient failure. Disk mirror log. To convert a mirrored mirror log to a disk mirror log, use the following command: lvconvert --mirrorlog disk my_vg/my_lv . (BZ#1643562) The dmraid package has been removed The dmraid package has been removed from Red Hat Enterprise Linux 8. Users requiring support for combined hardware and software RAID host bus adapters (HBA) should use the mdadm utility, which supports native MD software RAID, the SNIA RAID Common Disk Data Format (DDF), and the Intel(R) Matrix Storage Manager (IMSM) formats. (BZ#1643576) Software FCoE and Fibre Channel no longer support the target mode Software FCoE: NIC Software FCoE target functionality is removed in Red Hat Enterprise Linux 8.0. Fibre Channel no longer supports the target mode. Target mode is disabled for the qla2xxx QLogic Fibre Channel driver in Red Hat Enterprise Linux 8.0. ( BZ#1666377 ) 5.4.2.7. Networking The -ok option of the tc command removed The -ok option of the tc command has been removed in Red Hat Enterprise Linux 8. As a workaround, users can implement code to communicate directly with the kernel via netlink. Response messages received indicate completion and status of sent requests. An alternative way for less time-critical applications is to call tc for each command separately. This may happen with a custom script which simulates the tc -batch behavior by printing OK for each successful tc invocation. (BZ#1640991) Arptables FORWARD is removed from filter tables in RHEL 8 The arptables FORWARD chain functionality has been removed in Red Hat Enterprise Linux (RHEL) 8. You can now use the FORWARD chain of the ebtables tool by adding the rules into it. (BZ#1646159) The compile-time support for wireless extensions in wpa_supplicant is disabled The wpa_supplicant package does not support wireless extensions.
Consequently, a user who tries to use wext as a command-line argument, or who uses it on old adapters that support only wireless extensions, cannot run the wpa_supplicant daemon. (BZ#1537143) 5.4.2.8. Security OpenSCAP API consolidated With this update, the OpenSCAP shared library API has been consolidated. 63 symbols have been removed, 14 added, and 4 have an updated signature. The removed symbols in OpenSCAP 1.3.0 include: symbols that were marked as deprecated in version 1.2.0 SEAP protocol symbols internal helper functions unused library symbols unimplemented symbols (BZ#1618464) securetty is now disabled by default Because of the dynamic nature of tty device files on modern Linux systems, the securetty PAM module has been disabled by default and the /etc/securetty configuration file is no longer included in RHEL. Because /etc/securetty listed many possible devices, the practical effect in most cases was to allow access by default, so this change has only a minor impact. However, if you use a more restrictive configuration, you need to add a line enabling the pam_securetty.so module to the appropriate files in the /etc/pam.d directory, and create a new /etc/securetty file. ( BZ#1650701 ) KLIPS has been removed from Libreswan In Red Hat Enterprise Linux 8, support for the Kernel IP Security (KLIPS) IPsec stack has been removed from Libreswan . ( BZ#1657854 ) 5.4.2.9. Virtualization IVSHMEM has been disabled The inter-VM shared memory device (IVSHMEM) feature, which provides shared memory between multiple virtual machines, is now disabled in Red Hat Enterprise Linux 8. A virtual machine configured with this device will fail to boot. Similarly, attempting to hot-plug such a device will fail as well. (BZ#1621817) "virt-install" can no longer use NFS locations With this update, the "virt-install" utility cannot mount NFS locations. As a consequence, attempting to install a virtual machine using "virt-install" with an NFS address as a value of the "--location" option fails. To work around this change, mount your NFS share prior to using "virt-install", or use an HTTP location. (BZ#1643609) 5.5. Deprecated functionality Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in the next major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced than that of the deprecated one, and provides further recommendations. For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 . 5.5.1.
Installer and image creation The --interactive option of the ignoredisk Kickstart command has been deprecated Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. (BZ#1637872) Several Kickstart commands and options have been deprecated Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs. auth or authconfig device deviceprobe dmraid install lilo lilocheck mouse multipath bootloader --upgrade ignoredisk --interactive partition --active reboot --kexec Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document. (BZ#1642765) 5.5.2. File systems and storage NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. (BZ#1592011) The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the Tuned service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see the following article: Why does the 'elevator=' parameter no longer work in RHEL8 . (BZ#1665295) The VDO Ansible module in VDO packages The VDO Ansible module is currently provided by the vdo RPM package. In a future release, the VDO Ansible module will be moved to the Ansible RPM packages. ( BZ#1669537 ) 5.5.3. Networking Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. (BZ#1647725) 5.5.4. Kernel The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. (BZ#1878207) 5.5.5. 
Security DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. (BZ#1646541) SSL2 Client Hello has been deprecated in NSS The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow to start a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. (BZ#1645153) TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level: For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. ( BZ#1660839 ) 5.5.6. Virtualization Virtual machine snapshots are not properly supported in RHEL 8 The current mechanism of creating virtual machine (VM) snapshots has been deprecated, as it is not working reliably. As a consequence, it is recommended not to use VM snapshots in RHEL 8. Note that a new VM snapshot mechanism is under development and will be fully implemented in a future minor release of RHEL 8. ( BZ#1686057 ) The Cirrus VGA virtual GPU type has been deprecated With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga , virtio-vga , or qxl devices instead of Cirrus VGA. (BZ#1651994) virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL 8 web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. However, in Red Hat Enterprise Linux 8.0, some features may only be accessible from either virt-manager or the command line. (JIRA:RHELPLAN-10304) 5.5.7. Deprecated packages The following packages have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux: 389-ds-base-legacy-tools authd custodia hostname libidn net-tools network-scripts nss-pam-ldapd sendmail yp-tools ypbind ypserv 5.6. Known issues This part describes known issues in Red Hat Enterprise Linux 8. 5.6.1. The web console Logging to RHEL web console with session_recording shell is not possible Currently, the RHEL web console logins will fail for tlog recording-enabled users. RHEL web console requires a user's shell to be present in the /etc/shells directory to allow a successful login. 
However, if tlog-rec-session is added to /etc/shells , a recorded user is able to disable recording by changing the shell from tlog-rec-session to another shell from /etc/shells , using the "chsh" utility. For this reason, Red Hat does not recommend adding tlog-rec-session to /etc/shells . (BZ#1631905) 5.6.2. Installer and image creation The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) The xorg-x11-drv-fbdev , xorg-x11-drv-vesa , and xorg-x11-drv-vmware video drivers are not installed by default Workstations with specific models of NVIDIA graphics cards and workstations with specific AMD accelerated processing units will not display the graphical login window after a RHEL 8.0 Server installation. To work around this problem, perform a RHEL 8.0 Workstation installation on a workstation machine. If a RHEL 8.0 Server installation is required on the workstation, manually install the base-x package group after installation by running the yum -y groupinstall base-x command. In addition, virtual machines relying on EFI for graphics support, such as Hyper-V, are also affected. If you selected the Server with GUI base environment on Hyper-V, you might be unable to log in due to a black screen displayed on reboot. To work around this problem on Hyper-V, enable multi- or single-user mode using the following steps: Reboot the virtual machine. During the booting process, select the required kernel using the up and down arrow keys on your keyboard. Press the e key on your keyboard to edit the kernel command line. Add systemd.unit=multi-user.target to the kernel command line in GRUB. Press Ctrl-X to start the virtual machine. After logging in, run the yum -y groupinstall base-x command. Reboot the virtual machine to access the graphical mode. (BZ#1687489) Installation fails when using the reboot --kexec command The RHEL 8 installation fails when using a Kickstart file that contains the reboot --kexec command. To avoid the problem, use the reboot command instead of reboot --kexec in your Kickstart file. ( BZ#1672405 ) Copying the content of the Binary DVD.iso file to a partition omits the .treeinfo and .discinfo files During local installation, while copying the content of the RHEL 8 Binary DVD.iso image file to a partition, the * in the cp <path>/* <mounted partition>/dir command fails to copy the .treeinfo and .discinfo files. These files are required for a successful installation. As a result, the BaseOS and AppStream repositories are not loaded, and a debug-related log message in the anaconda.log file is the only record of the problem. To work around the problem, copy the missing .treeinfo and .discinfo files to the partition. (BZ#1692746) Anaconda installation includes low limits of minimal resources setting requirements Anaconda initiates the installation even on systems that do not have the required minimal resources available and does not provide a warning message about the resources required to perform the installation successfully.
As a result, the installation can fail and the output errors do not provide clear messages for possible debug and recovery. To work around this problem, make sure that the system has the minimal resources settings required for installation: 2GB memory on PPC64(LE) and 1GB on x86_64. As a result, it should be possible to perform a successful installation. (BZ#1696609) The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters do not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) 5.6.3. Kernel The i40iw module does not load automatically on boot Due to many i40e NICs not supporting iWarp and the i40iw module not fully supporting suspend/resume, this module is not automatically loaded by default to ensure suspend/resume works properly. To work around this problem, manually edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automated load of i40iw . Also note that if there is another RDMA device installed with a i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module. (BZ#1623712) The system sometimes becomes unresponsive when many devices are connected When Red Hat Enterprise Linux 8 configures a large number of devices, a large number of console messages occurs on the system console. This happens, for example, when there are a large number of logical unit numbers (LUNs), with multiple paths to each LUN. The flood of console messages, in addition to other work the kernel is doing, might cause the kernel watchdog to force a kernel panic because the kernel appears to be hung. Because the scan happens early in the boot cycle, the system becomes unresponsive when many devices are connected. This typically occurs at boot time. If kdump is enabled on your machine during the device scan event after boot, the hard lockup results in a capture of a vmcore image. To work around this problem, increase the watchdog lockup timer. To do so, add the watchdog_thresh= N option to the kernel command line. Replace N with the number of seconds: If you have less than a thousand devices, use 30 . If you have more than a thousand devices, use 60 . For storage, the number of device is the number of paths to all the LUNs: generally, the number of /dev/sd* devices. After applying the workaround, the system no longer becomes unresponsive when configuring a large amount of devices. (BZ#1598448) KSM sometimes ignores NUMA memory policies When the kernel shared memory (KSM) feature is enabled with the merge_across_nodes=1 parameter, KSM ignores memory policies set by the mbind() function, and may merge pages from some memory areas to Non-Uniform Memory Access (NUMA) nodes that do not match the policies. To work around this problem, disable KSM or set the merge_across_nodes parameter to 0 if using NUMA memory binding with QEMU. As a result, NUMA memory policies configured for the KVM VM will work as expected. 
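For illustration, a minimal sketch of the described workaround (run as root; a sketch under stated assumptions, not the only supported procedure):
# Stop KSM and unmerge shared pages; the kernel rejects the policy change while pages are merged
echo 2 > /sys/kernel/mm/ksm/run
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes
echo 1 > /sys/kernel/mm/ksm/run
# Alternatively, leave KSM disabled entirely
systemctl disable --now ksm ksmtuned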
(BZ#1153521) The qede driver hangs the NIC and makes it unusable Due to a bug, the qede driver for the 41000 and 45000 QLogic series NICs can cause Firmware upgrade and debug data collection operations to fail and make the NIC unusable or in hung state until reboot (PCI reset) of the host makes the NIC operational again. This issue has been detected in all of the following scenarios: when upgrading Firmware of the NIC using the inbox driver when collecting debug data running the ethtool -d ethx command running the sosreport command as it includes ethtool -d ethx. when the inbox driver initiates automatic debug data collection, such as IO timeout, Mail Box Command timeout and a Hardware Attention. A future erratum from Red Hat will be released via Red Hat Bug Advisory (RHBA) to address this issue. To work around this problem, create a case in https://access.redhat.com/support to request a supported fix for the issue until the RHBA is released. (BZ#1697310) Radix tree symbols were added to kernel-abi-whitelists The following radix tree symbols have been added to the kernel-abi-whitelists package in Red Hat Enterprise Linux 8: __radix_tree_insert __radix_tree_next_slot radix_tree_delete radix_tree_gang_lookup radix_tree_gang_lookup_tag radix_tree_next_chunk radix_tree_preload radix_tree_tag_set The symbols above were not supposed to be present and will be removed from the RHEL8 whitelist. (BZ#1695142) podman fails to checkpoint a container in RHEL 8 The version of the Checkpoint and Restore In Userspace (CRIU) package is outdated in Red Hat Enterprise Linux 8. As a consequence, CRIU does not support container checkpoint and restore functionality and the podman utility fails to checkpoint containers. When running the podman container checkpoint command, the following error message is displayed: 'checkpointing a container requires at least CRIU 31100' (BZ#1689746) early-kdump and standard kdump fail if the add_dracutmodules+=earlykdump option is used in dracut.conf Currently, an inconsistency occurs between the kernel version being installed for early-kdump and the kernel version initramfs is generated for. As a consequence, booting with early-kdump enabled, early-kdump fails. In addition, if early-kdump detects that it is being included in a standard kdump initramfs image, it forces an exit. Therefore the standard kdump service also fails when trying to rebuild kdump initramfs if early-kdump is added as a default dracut module. As a consequence, early-kdump and standard kdump both fail. To work around this problem, do not add add_dracutmodules+=earlykdump or any equivalent configuration in the dracut.conf file. As a result, early-kdump is not included by dracut by default, which prevents the problem from occuring. However, if an early-kdump image is required, it has to be created manually. (BZ#1662911) Debug kernel fails to boot in crash capture environment in RHEL 8 Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment. 
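For example, the reserved crash kernel memory can be increased persistently with grubby ; the 512M value below is only an illustration, so choose a size that accommodates the debug kernel on your system:
# grubby --update-kernel=ALL --args="crashkernel=512M"
# reboot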
(BZ#1659609) Network interface is renamed to kdump-<interface-name> when fadump is used When firmware-assisted dump ( fadump ) is utilized to capture a vmcore and store it to a remote machine using SSH or NFS protocol, the network interface is renamed to kdump-<interface-name> if <interface-name> is generic, for example, *eth#, or net#. This problem occurs because the vmcore capture scripts in the initial RAM disk ( initrd ) add the kdump- prefix to the network interface name to secure persistent naming. The same initrd is used also for a regular boot, so the interface name is changed for the production kernel too. (BZ#1745507) 5.6.4. Software management Running yum list under a non-root user causes YUM crash When running the yum list command under a non-root user after the libdnf package has been updated, YUM can terminate unexpectedly. If you hit this bug, run yum list under root to resolve the problem. As a result, subsequent attempts to run yum list under a non-root user no longer cause YUM crash. (BZ#1642458) YUM v4 skips unavailable repositories by default YUM v4 defaults to the "skip_if_unavailable=True" setting for all repositories. As a consequence, if the required repository is not available, the packages from the repository are not considered in the install, search, or update operations. Subsequently, some yum commands and yum-based scripts succeed with exit code 0 even if there are unavailable repositories. Currently, there is no other workaround available than updating the libdnf package. ( BZ#1679509 ) 5.6.5. Infrastructure services The nslookup and host utilities ignore replies from name servers with recursion not available If more name servers are configured and recursion is not available for a name server, the nslookup and host utilities ignore replies from such name server unless it is the one that is last configured. In case of the last configured name server, answer is accepted even without the recursion available flag. However, if the last configured name server is not responding or unreachable, name resolution fails. To work around the problem: Ensure that configured name servers always reply with the recursion available flag set. Allow recursion for all internal clients. To troubleshoot the problem, you can also use the dig utility to detect whether recursion is available or not. (BZ#1599459) 5.6.6. Shells and command-line tools Python binding of the net-snmp package is unavailable The Net-SNMP suite of tools does not provide binding for Python 3 , which is the default Python implementation in RHEL 8. Consequently, python-net-snmp , python2-net-snmp , or python3-net-snmp packages are unavailable in RHEL 8. (BZ#1584510) systemd in debug mode produces unnecessary log messages The systemd system and service manager in debug mode produces unnecessary log messages that start with: List the messages by running: These debug messages are harmless, and you can safely ignore them. Currently, there is no workaround available. ( BZ#1658691 ) ksh with the KEYBD trap mishandles multibyte characters The Korn Shell (KSH) is unable to correctly handle multibyte characters when the KEYBD trap is enabled. Consequently, when the user enters, for example, Japanese characters, ksh displays an incorrect string. To work around this problem, disable the KEYBD trap in the /etc/kshrc file by commenting out the following line: For more details, see a related Knowledgebase solution . ( BZ#1503922 ) 5.6.7. 
Dynamic programming languages, web and database servers Database servers are not installable in parallel The mariadb and mysql modules cannot be installed in parallel in RHEL 8.0 due to conflicting RPM packages. By design, it is impossible to install more than one version (stream) of the same module in parallel. For example, you need to choose only one of the available streams from the postgresql module, either 10 (default) or 9.6 . Parallel installation of components is possible in Red Hat Software Collections for RHEL 6 and RHEL 7. In RHEL 8, different versions of database servers can be used in containers. (BZ#1566048) Problems in mod_cgid logging If the mod_cgid Apache httpd module is used under a threaded multi-processing module (MPM), which is the default situation in RHEL 8, the following logging problems occur: The stderr output of the CGI script is not prefixed with standard timestamp information. The stderr output of the CGI script is not correctly redirected to a log file specific to the VirtualHost , if configured. (BZ#1633224) The IO::Socket::SSL Perl module does not support TLS 1.3 New features of the TLS 1.3 protocol, such as session resumption or post-handshake authentication, were implemented in the RHEL 8 OpenSSL library but not in the Net::SSLeay Perl module, and thus are unavailable in the IO::Socket::SSL Perl module. Consequently, client certificate authentication might fail and reestablishing sessions might be slower than with the TLS 1.2 protocol. To work around this problem, disable usage of TLS 1.3 by setting the SSL_version option to the !TLSv1_3 value when creating an IO::Socket::SSL object. (BZ#1632600) Generated Scala documentation is unreadable When generating documentation using the scaladoc command, the resulting HTML page is unusable due to missing JavaScript resources. (BZ#1641744) 5.6.8. Desktop qxl does not work on VMs based on Wayland The qxl driver is not able to provide kernel mode setting features on certain hypervisors. Consequently, the graphics based on the Wayland protocol are not available to virtual machines (VMs) that use qxl , and the Wayland-based login screen does not start. To work around the problem, use either the Xorg display server instead of GNOME Shell on Wayland on VMs that use QXL graphics, or the virtio driver instead of the qxl driver for your VMs. (BZ#1641763) The console prompt is not displayed when running systemctl isolate multi-user.target When running the systemctl isolate multi-user.target command from GNOME Terminal in a GNOME Desktop session, only a cursor is displayed, and not the console prompt. To work around the problem, press the Ctrl+Alt+F2 keys. As a result, the console prompt appears. The behavior applies both to GNOME Shell on Wayland and the X.Org display server. ( BZ#1678627 ) 5.6.9. Graphics infrastructures Desktop running on X.Org hangs when changing to low screen resolutions When using the GNOME desktop with the X.Org display server, the desktop becomes unresponsive if you attempt to change the screen resolution to low values. To work around the problem, do not set the screen resolution to a value lower than 800 x 600 pixels. (BZ#1655413) radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail.
To work around this problem, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file:
dracut_args --omit-drivers "radeon" force_rebuild 1
Restart the machine and kdump . After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) 5.6.10. Hardware enablement Backup slave MII status does not work when using the ARP link monitor By default, devices managed by the i40e driver do source pruning, which drops packets that have a source Media Access Control (MAC) address that matches one of the receive filters. As a consequence, backup slave Media Independent Interface (MII) status does not work when using the Address Resolution Protocol (ARP) monitoring in channel bonding. To work around this problem, disable source pruning by running the following command:
ethtool --set-priv-flags <ethX> disable-source-pruning on
As a result, the backup slave MII status will work as expected. (BZ#1645433) The HP NMI watchdog in some cases does not generate a crash dump The hpwdt driver for the HP NMI watchdog is sometimes not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. As a consequence, hpwdt in some cases cannot call a panic to generate a crash dump. (BZ#1602962) 5.6.11. Identity Management The KCM credential cache is not suitable for a large number of credentials in a single credential cache The Kerberos Credential Manager (KCM) can handle ccache sizes of up to 64 kB. If it contains too many credentials, Kerberos operations, such as kinit , fail due to a hardcoded limit on the buffer used to transfer data between the sssd-kcm component and the underlying database. To work around this problem, add the ccache_storage = memory option in the kcm section of the /etc/sssd/sssd.conf file. This instructs the kcm responder to only store the credential caches in-memory, not persistently. If you do this, restarting the system or sssd-kcm clears the credential caches. (BZ#1448094) Changing /etc/nsswitch.conf requires a manual system reboot Any change to the /etc/nsswitch.conf file, for example running the authselect select profile_id command, requires a system reboot so that all relevant processes use the updated version of the /etc/nsswitch.conf file. If a system reboot is not possible, restart the service that joins your system to Active Directory, which is the System Security Services Daemon (SSSD) or winbind . ( BZ#1657295 ) Conflicting timeout values prevent SSSD from connecting to servers Some of the default timeout values related to the failover operations used by the System Security Services Daemon (SSSD) are conflicting. Consequently, the timeout value reserved for SSSD to talk to a single server prevents SSSD from trying other servers before the connecting operation as a whole times out. To work around the problem, set the value of the ldap_opt_timeout timeout parameter higher than the value of the dns_resolver_timeout parameter, and set the value of the dns_resolver_timeout parameter higher than the value of the dns_resolver_op_timeout parameter. (BZ#1382750) SSSD can look up only unique certificates in ID overrides When multiple ID overrides contain the same certificate, the System Security Services Daemon (SSSD) is unable to resolve queries for the users that match the certificate. An attempt to look up these users does not return any user. Note that looking up users by using their user name or UID works as expected.
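To illustrate the KCM and failover timeout workarounds above, the corresponding /etc/sssd/sssd.conf fragments could look as follows. The domain name and the timeout values are placeholders; only the relative ordering of the timeouts matters.
[kcm]
ccache_storage = memory

[domain/example.com]
dns_resolver_op_timeout = 3
dns_resolver_timeout = 6
ldap_opt_timeout = 12
After editing the file, restart the affected services:
# systemctl restart sssd sssd-kcm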
(BZ#1446101) SSSD does not correctly handle multiple certificate matching rules with the same priority If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certamp(5) man page. (BZ#1447945) SSSD returns incorrect LDAP group membership for local users If the System Security Services Daemon (SSSD) serves users from the local files, the files provider does not include group memberships from other domains. As a consequence, if a local user is a member of an LDAP group, the id local_user command does not return the user's LDAP group membership. To work around the problem, either revert the order of the databases where the system is looking up the group membership of users in the /etc/nsswitch.conf file, replacing sss files with files sss , or disable the implicit files domain by adding to the [sssd] section in the /etc/sssd/sssd.conf file. As a result, id local_user returns correct LDAP group membership for local users. ( BZ#1652562 ) Sudo rules might not work with id_provider=ad if sudo rules reference group names System Security Services Daemon (SSSD) does not resolve Active Directory group names during the initgroups operation because of an optimization of communication between AD and SSSD by using a cache. The cache entry contains only a Security Identifiers (SID) and not group names until the group is requested by name or ID. Therefore, sudo rules do not match the AD group unless the groups are fully resolved prior to running sudo. To work around this problem, you need to disable the optimization: Open the /etc/sssd/sssd.conf file and add the ldap_use_tokengroups = false parameter in the [domain/example.com] section. ( BZ#1659457 ) Default PAM settings for systemd-user have changed in RHEL 8 which may influence SSSD behavior The Pluggable authentication modules (PAM) stack has changed in Red Hat Enterprise Linux 8. For example, the systemd user session now starts a PAM conversation using the systemd-user PAM service. This service now recursively includes the system-auth PAM service, which may include the pam_sss.so interface. This means that the SSSD access control is always called. Be aware of the change when designing access control rules for RHEL 8 systems. For example, you can add the systemd-user service to the allowed services list. Please note that for some access control mechanisms, such as IPA HBAC or AD GPOs, the systemd-user service is has been added to the allowed services list by default and you do not need to take any action. ( BZ#1669407 ) IdM server does not work in FIPS Due to an incomplete implementation of the SSL connector for Tomcat, an Identity Management (IdM) server with a certificate server installed does not work on machines with the FIPS mode enabled. ( BZ#1673296 ) Samba denies access when using the sss ID mapping plug-in To use Samba as a file server on a RHEL host joined to an Active Directory (AD) domain, the Samba Winbind service must be running even if SSSD is used to manage user and groups from AD. If you join the domain using the realm join --client-software=sssd command or without specifying the --client-software parameter in this command, realm creates only the /etc/sssd/sssd.conf file. 
When you run Samba on the domain member with this configuration and add a configuration that uses the sss ID mapping back end to the /etc/samba/smb.conf file to share directories, changes in the ID mapping back end can cause errors. Consequently, Samba denies access to files in certain cases, even if the user or group exists and it is known by SSSD. If you plan to upgrade from a RHEL version and the ldap_id_mapping parameter in the /etc/sssd/sssd.conf file is set to True , which is the default, no workaround is available. In this case, do not upgrade the host to RHEL 8 until the problem has been fixed. Possible workarounds in other scenarios: For new installations, join the domain using the realm join --client-software=winbind command. This configures the system to use Winbind instead of SSSD for all user and group lookups. In this case, Samba uses the rid or ad ID mapping plug-in in /etc/samba/smb.conf depending on whether you set the --automatic-id-mapping option to yes (default) or no . If you plan to use SSSD in future or on other systems, using --automatic-id-mapping=no allows an easier migration but requires that you store POSIX UIDs and GIDs in AD for all users and groups. When upgrading from a RHEL version, and if the ldap_id_mapping parameter in the /etc/sssd/sssd.conf file is set to False and the system uses the uidNumber and gidNumber attributes from AD for ID mapping: Change the idmap config <domain> : backend = sss entry in the /etc/samba/smb.conf file to idmap config <domain> : backend = ad Use the systemctl status winbind command to restart the Winbind. ( BZ#1657665 ) The nuxwdog service fails in HSM environments and requires to install the keyutils package in non-HSM environments The nuxwdog watchdog service has been integrated into Certificate System. As a consequence, nuxwdog is no longer provided as a separate package. To use the watchdog service, install the pki-server package. Note that the nuxwdog service has following known issues: The nuxwdog service does not work if you use a hardware storage module (HSM). For this issue, no workaround is available. In a non-HSM environment, Red Hat Enterprise Linux 8.0 does not automatically install the keyutils package as a dependency. To install the package manually, use the dnf install keyutils command. ( BZ#1652269 ) Adding ID overrides of AD users works only in the IdM CLI Currently, adding ID overrides of Active Directory (AD) users to Identity Management (IdM) groups for the purpose of granting access to management roles fails in the IdM Web UI. To work around the problem, use the IdM command-line interface (CLI) instead. Note that if you installed the ipa-idoverride-memberof-plugin package on the IdM server after previously performing certain operations using the ipa utility, Red Hat recommends cleaning up the ipa utility's cache to force it to refresh its view about the IdM server metadata. To do so, remove the content of the ~/.cache/ipa directory for the user under which the ipa utility is executed. For example, for root: ( BZ#1651577 ) No information about required DNS records displayed when enabling support for AD trust in IdM When enabling support for Active Directory (AD) trust in Red Hat Enterprise Linux Identity Management (IdM) installation with external DNS management, no information about required DNS records is displayed. Forest trust to AD is not successful until the required DNS records are added. 
To work around this problem, run the 'ipa dns-update-system-records --dry-run' command to obtain a list of all DNS records required by IdM. When external DNS for IdM domain defines the required DNS records, establishing forest trust to AD is possible. ( BZ#1665051 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 5.6.12. Compilers and development tools Synthetic functions generated by GCC confuse SystemTap GCC optimization can generate synthetic functions for partially inlined copies of other functions. Tools such as SystemTap and GDB can not distinguish these synthetic functions from real functions. As a consequence, SystemTap can place probes on both synthetic and real function entry points, and thus register multiple probe hits for a single real function call. To work around this problem, SystemTap scripts must be adapted with measures such as detecting recursion and suppressing probes related to inlined partial functions. For example, a script can try to avoid the described problem as follows: Note that this example script does not take into account all possible scenarios, such as missed kprobes or kretprobes, or genuine intended recursion. (BZ#1169184) The ltrace tool does not report function calls Because of improvements to binary hardening applied to all RHEL components, the ltrace tool can no longer detect function calls in binary files coming from RHEL components. As a consequence, ltrace output is empty because it does not report any detected calls when used on such binary files. There is no workaround currently available. As a note, ltrace can correctly report calls in custom binary files built without the respective hardening flags. (BZ#1618748, BZ#1655368) 5.6.13. File systems and storage Unable to discover an iSCSI target using the iscsiuio package Red Hat Enterprise Linux 8 does not allow concurrent access to PCI register areas. As a consequence, a could not set host net params (err 29) error was set and the connection to the discovery portal failed. To work around this problem, set the kernel parameter iomem=relaxed in the kernel command line for the iSCSI offload. 
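For example, the parameter can be added to the kernel command line persistently with grubby (shown as a sketch):
# grubby --update-kernel=ALL --args="iomem=relaxed"
# reboot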
This specifically involves any offload using the bnx2i driver. As a result, connection to the discovery portal is now successful and iscsiuio package now works correctly. (BZ#1626629) VDO volumes lose deduplication advice after moving to a different-endian platform Virtual Data Optimizer (VDO) writes the Universal Deduplication Service (UDS) index header in the endian format native to your platform. VDO considers the UDS index corrupt and overwrites it with a new, blank index if you move your VDO volume to a platform that uses a different endian. As a consequence, any deduplication advice stored in the UDS index prior to being overwritten is lost. VDO is then unable to deduplicate newly written data against the data that was stored before you moved the volume, leading to lower space savings. ( BZ#1696492 ) The XFS DAX mount option is incompatible with shared copy-on-write data extents An XFS file system formatted with the shared copy-on-write data extents feature is not compatible with the -o dax mount option. As a consequence, mounting such a file system with -o dax fails. To work around the problem, format the file system with the reflink=0 metadata option to disable shared copy-on-write data extents: As a result, mounting the file system with -o dax is successful. For more information, see Creating a file system DAX namespace on an NVDIMM . (BZ#1620330) Certain SCSI drivers might sometimes use an excessive amount of memory Certain SCSI drivers use a larger amount of memory than in RHEL 7. In certain cases, such as vPort creation on a Fibre Channel host bus adapter (HBA), the memory usage might be excessive, depending upon the system configuration. The increased memory usage is caused by memory preallocation in the block layer. Both the multiqueue block device scheduling (BLK-MQ) and the multiqueue SCSI stack (SCSI-MQ) preallocate memory for each I/O request in RHEL 8, leading to the increased memory usage. (BZ#1733278) 5.6.14. Networking nftables does not support multi-dimensional IP set types The nftables packet-filtering framework does not support set types with concatenations and intervals. Consequently, you cannot use multi-dimensional IP set types, such as hash:net,port , with nftables . To work around this problem, use the iptables framework with the ipset tool if you require multi-dimensional IP set types. (BZ#1593711) The TRACE target in the iptables-extensions(8) man page does not refer to the nf_tables variant The description of the TRACE target in the iptables-extensions(8) man page refers only to the compat variant, but Red Hat Enterprise Linux (RHEL) 8.0 uses the nf_tables variant. The nftables -based iptables utility in RHEL uses the meta nftrace expression internally. Therefore, the kernel does not print TRACE events in the kernel log but sends them to the user space instead. However, the man page does not reference the xtables-monitor command-line utility to display these events. ( BZ#1658734 ) RHEL 8 shows the status of an 802.3ad bond as "Churned" after a switch was unavailable for an extended period of time Currently, when you configure an 802.3ad network bond and the switch is down for an extended period of time, Red Hat Enterprise Linux properly shows the status of the bond as "Churned", even after the connection returns to a working state. However, this is the intended behavior, as the "Churned" status aims to tell the administrator that a significant link outage occurred. To clear this status, restart the network bond or reboot the host. 
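As an illustration of the ipset-based workaround for multi-dimensional set types described earlier in this section, a hash:net,port set can be created and matched from iptables as follows; the set name, network, and port are placeholders:
# ipset create allowed_services hash:net,port
# ipset add allowed_services 192.0.2.0/24,tcp:443
# iptables -A INPUT -m set --match-set allowed_services src,dst -j ACCEPT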
(BZ#1708807) The ebtables command does not support broute table The nftables -based ebtables command in Red Hat Enterprise Linux 8.0 does not support the broute table. Consequently, users can not use this feature. (BZ#1649790) IPsec network traffic fails during IPsec offloading when GRO is disabled IPsec offloading is not expected to work when Generic Receive Offload (GRO) is disabled on the device. If IPsec offloading is configured on a network interface and GRO is disabled on that device, IPsec network traffic fails. To work around this problem, keep GRO enabled on the device. (BZ#1649647) NetworkManager now uses the internal DHCP plug-in by default NetworkManager supports the internal and dhclient DHCP plug-ins. By default, NetworkManager in Red Hat Enterprise Linux (RHEL) 7 uses the dhclient and RHEL 8 the internal plug-in. In certain situations, the plug-ins behave differently. For example, dhclient can use additional settings specified in the /etc/dhcp/ directory. If you upgrade from RHEL 7 to RHEL 8 and NetworkManager behaves different, add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file to use the dhclient plug-in: (BZ#1571655) Advanced options of IPsec based VPN cannot be changed using gnome-control-center When configuring an IPsec based VPN connection using the gnome-control-center application, the Advanced dialog will only display the configuration, but will not allow doing any change. As a consequence, users cannot change any advanced IPsec options. To work around this problem, use the nm-connection-editor or nmcli tools to perform configuration of the advanced properties. ( BZ#1697326 ) The /etc/hosts.allow and /etc/hosts.deny files contain inaccurate information The tcp_wrappers package is removed in Red Hat Enterprise Linux (RHEL) 8, but not its files, /etc/hosts.allow and /etc/hosts.deny. As a consequence, these files contain outdated information, which is not applicable for RHEL 8. To work around this problem, use firewall rules for filtering access to the services. For filtering based on usernames and hostnames, use the application-specific configuration. ( BZ#1663556 ) IP defragmentation cannot be sustainable under network traffic overload In Red Hat Enterprise Linux 8, the garbage collection kernel thread has been removed and IP fragments expire only on timeout. As a result, CPU usage under Denial of Service (DoS) is much lower, and the maximum sustainable fragments drop rate is limited by the amount of memory configured for the IP reassembly unit. With the default settings workloads requiring fragmented traffic in presence of packet drops, packet reorder or many concurrent fragmented flows may incur in relevant performance regression. In this case, users can use the appropriate tuning of the IP fragmentation cache in the /proc/sys/net/ipv4 directory setting the ipfrag_high_thresh variable to limit the amount of memory and the ipfrag_time variable to keep per seconds an IP fragment in memory. For example, echo 419430400 > /proc/sys/net/ipv4/ipfrag_high_thresh echo 1 > /proc/sys/net/ipv4/ipfrag_time The above applies to IPv4 traffic. For IPv6 the relevant tunables are: ip6frag_high_thresh and ip6frag_time in the /proc/sys/net/ipv6/ directory. Note that any workload relying on high-speed fragmented traffic can cause stability and performance issues, especially with packet drops, and such kind of deployments are highly discouraged in production. 
(BZ#1597671) Network interface name changes in RHEL 8 In Red Hat Enterprise Linux 8, the same consistent network device naming scheme is used by default as in RHEL 7. However, some kernel drivers, such as e1000e , nfp , qede , sfc , tg3 and bnxt_en changed their consistent name on a fresh installation of RHEL 8. However, the names are preserved on upgrade from RHEL 7. ( BZ#1701968 ) 5.6.15. Security libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. (BZ#1666328) libssh does not comply with the system-wide crypto policy The libssh library does not follow system-wide cryptographic policy settings. As a consequence, the set of supported algorithms is not changed when the administrator changes the crypto policies level using the update-crypto-policies command. To work around this problem, the set of advertised algorithms needs to be set individually by every application that uses libssh . As a result, when the system is set to the LEGACY or FUTURE policy level, applications that use libssh behave inconsistently when compared to OpenSSH . (BZ#1646563) Certain rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog : To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. ( BZ#1679512 ) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) OpenSCAP rpmverifypackage does not work correctly The chdir and chroot system calls are called twice by the rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content. To work around this problem, do not use the rpmverifypackage_test OVAL test in your content or use only the content from the scap-security-guide package where rpmverifypackage_test is not used. (BZ#1646197) SCAP Workbench fails to generate results-based remediations from tailored profiles The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool: To work around this problem, use the oscap command with the --tailoring-file option. 
(BZ#1640715) Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8 The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap which might cause confusion. That is done to preserve backward compatibility with Red Hat Enterprise Linux 7. (BZ#1665082) OpenSCAP rpmverifyfile does not work The OpenSCAP scanner does not correctly change the current working directory in offline mode, and the fchdir function is not called with the correct arguments in the OpenSCAP rpmverifyfile probe. Consequently, scanning arbitrary file systems using the oscap-chroot command fails if rpmverifyfile_test is used in an SCAP content. As a result, oscap-chroot aborts in the described scenario. (BZ#1636431) OpenSCAP does not provide offline scanning of virtual machines and containers Refactoring of OpenSCAP codebase caused certain RPM probes to fail to scan VM and containers file systems in offline mode. For that reason, the following tools were removed from the openscap-utils package: oscap-vm and oscap-chroot . Also, the openscap-containers package was completely removed. (BZ#1618489) A utility for security and compliance scanning of containers is not available In Red Hat Enterprise Linux 7, the oscap-docker utility can be used for scanning of Docker containers based on Atomic technologies. In Red Hat Enterprise Linux 8, the Docker- and Atomic-related OpenSCAP commands are not available. As a result, oscap-docker or an equivalent utility for security and compliance scanning of containers is not available in RHEL 8 at the moment. (BZ#1642373) The OpenSSL TLS library does not detect if the PKCS#11 token supports creation of raw RSA or RSA-PSS signatures The TLS-1.3 protocol requires the support for RSA-PSS signature. If the PKCS#11 token does not support raw RSA or RSA-PSS signatures, the server applications which use OpenSSL TLS library will fail to work with the RSA key if it is held by the PKCS#11 token. As a result, TLS communication will fail. To work around this problem, configure server or client to use the TLS-1.2 version as the highest TLS protocol version available. ( BZ#1681178 ) Apache httpd fails to start if it uses an RSA private key stored in a PKCS#11 device and an RSA-PSS certificate The PKCS#11 standard does not differentiate between RSA and RSA-PSS key objects and uses the CKK_RSA type for both. However, OpenSSL uses different types for RSA and RSA-PSS keys. As a consequence, the openssl-pkcs11 engine cannot determine which type should be provided to OpenSSL for PKCS#11 RSA key objects. Currently, the engine sets the key type as RSA keys for all PKCS#11 CKK_RSA objects. When OpenSSL compares the types of an RSA-PSS public key obtained from the certificate with the type contained in an RSA private key object provided by the engine, it concludes that the types are different. Therefore, the certificate and the private key do not match. The check performed in the X509_check_private_key() OpenSSL function returns an error in this scenario. The httpd web server calls this function in its startup process to check if the provided certificate and key match. Since this check always fails for a certificate containing an RSA-PSS public key and a RSA private key stored in the PKCS#11 module, httpd fails to start using this configuration. There is no workaround available for this issue. 
( BZ#1664802 ) httpd fails to start if it uses an ECDSA private key without corresponding public key stored in a PKCS#11 device Unlike RSA keys, ECDSA private keys do not necessarily contain public key information. In this case, you cannot obtain the public key from an ECDSA private key. For this reason, a PKCS#11 device stores public key information in a separate object whether it is a public key object or a certificate object. OpenSSL expects the EVP_PKEY structure provided by an engine for a private key to contain the public key information. When filling the EVP_PKEY structure to be provided to OpenSSL, the engine in the openssl-pkcs11 package tries to fetch the public key information only from matching public key objects and ignores the present certificate objects. When OpenSSL requests an ECDSA private key from the engine, the provided EVP_PKEY structure does not contain the public key information if the public key is not present in the PKCS#11 device, even when a matching certificate that contains the public key is available. As a consequence, since the Apache httpd web server calls the X509_check_private_key() function, which requires the public key, in its start-up process, httpd fails to start in this scenario. To work around the problem, store both the private and public key in the PKCS#11 device when using ECDSA keys. As a result, httpd starts correctly when ECDSA keys are stored in the PKCS#11 device. ( BZ#1664807 ) OpenSSH does not handle PKCS #11 URIs for keys with mismatching labels correctly The OpenSSH suite can identify key pairs by a label. The label might differ on private and public keys stored on a smart card. Consequently, specifying PKCS #11 URIs with the object part (key label) can prevent OpenSSH from finding appropriate objects in PKCS #11. To work around this problem, specify PKCS #11 URIs without the object part. As a result, OpenSSH is able to use keys on smart cards referenced using PKCS #11 URIs. (BZ#1671262) Output of iptables-ebtables is not 100% compatible with ebtables In RHEL 8, the ebtables command is provided by the iptables-ebtables package, which contains an nftables -based reimplementation of the tool. This tool has a different code base, and its output deviates in aspects, which are either negligible or deliberate design choices. Consequently, when migrating your scripts parsing some ebtables output, adjust the scripts to reflect the following: MAC address formatting has been changed to be fixed in length. Where necessary, individual byte values contain a leading zero to maintain the format of two characters per octet. Formatting of IPv6 prefixes has been changed to conform with RFC 4291. The trailing part after the slash character no longer contains a netmask in the IPv6 address format but a prefix length. This change applies to valid (left-contiguous) masks only, while others are still printed in the old formatting. ( BZ#1674536 ) curve25519-sha256 is not supported by default in OpenSSH The curve25519-sha256 SSH key exchange algorithm is missing in the system-wide crypto policies configurations for the OpenSSH client and server even though it is compliant with the default policy level. As a consequence, if a client or a server uses curve25519-sha256 and this algorithm is not supported by the host, the connection might fail. 
To work around this problem, you can manually override the configuration of system-wide crypto policies by modifying the openssh.config and opensshserver.config files in the /etc/crypto-policies/back-ends/ directory for the OpenSSH client and server. Note that this configuration is overwritten with every change of system-wide crypto policies. See the update-crypto-policies(8) man page for more information. ( BZ#1678661 ) OpenSSL incorrectly handles PKCS #11 tokens that does not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. ( BZ#1685470 ) SSH connections with VMware-hosted systems do not work The current version of the OpenSSH suite introduces a change of the default IP Quality of Service (IPQoS) flags in SSH packets, which is not correctly handled by the VMware virtualization platform. Consequently, it is not possible to establish an SSH connection with systems on VMware. To work around this problem, include the IPQoS=throughput in the ssh_config file. As a result, SSH connections with VMware-hosted systems work correctly. See the RHEL 8 running in VMWare Workstation unable to connect via SSH to other hosts Knowledgebase solution article for more information. (BZ#1651763) 5.6.16. Subscription management No message is printed for the successful setting and unsetting of service-level When the candlepin service does not have a 'syspurpose' functionality, subscription manager uses a different code path to set the service-level argument. This code path does not print the result of the operation. As a consequence, no message is displayed when the service level is set by subscription manager. This is especially problematic when the service-level set has a typo or is not truly available. ( BZ#1661414 ) syspurpose addons have no effect on the subscription-manager attach --auto output. In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. ( BZ#1687900 ) 5.6.17. Virtualization ESXi virtual machines that were customized using cloud-init and cloned boot very slowly Currently, if the cloud-init service is used to modify a virtual machine (VM) that runs on the VMware ESXi hypervisor to use static IP and the VM is then cloned, the new cloned VM in some cases takes a very long time to reboot. This is caused cloud-init rewriting the VM's static IP to DHCP and then searching for an available datasource. To work around this problem, you can uninstall cloud-init after the VM is booted for the first time. As a result, the subsequent reboots will not be slowed down. (BZ#1666961, BZ#1706482 ) Enabling nested virtualization blocks live migration Currently, the nested virtualization feature is incompatible with live migration. 
Therefore, enabling nested virtualization on a RHEL 8 host prevents migrating any virtual machines (VMs) from the host, as well as saving VM state snapshots to disk. Note that nested virtualization is currently provided as a Technology Preview in RHEL 8, and is therefore not supported. In addition, nested virtualization is disabled by default. If you want to enable it, use the kvm_intel.nested or kvm_amd.nested module parameters. ( BZ#1689216 ) Using cloud-init to provision virtual machines on Microsoft Azure fails Currently, it is not possible to use the cloud-init utility to provision a RHEL 8 virtual machine (VM) on the Microsoft Azure platform. To work around this problem, use one of the following methods: Use the WALinuxAgent package instead of cloud-init to provision VMs on Microsoft Azure. Add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file: (BZ#1641190) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host. (BZ#1583445) virsh iface-\* commands do not work consistently Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-\* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications. (BZ#1664592) Linux virtual machine extensions for Azure sometimes do not work RHEL 8 does not include the python2 package by default. As a consequence, running Linux virtual machine extensions for Azure, also known as azure-linux-extensions , on a RHEL 8 VM in some cases fails. To increase the probability that azure-linux-extensions will work as expected, install python2 on the RHEL 8 VM manually: # yum install python2 (BZ#1561132) 5.6.18. Supportability redhat-support-tool does not collect sosreport automatically from opencase The redhat-support-tool command cannot create a sosreport archive. To work around this problem, run the sosreport command separately and then enter the redhat-support-tool addattachment -c command to upload the archive or use web UI on the Customer Portal. As a result, a case will be created and sosreport will be uploaded. Note that the findkerneldebugs , btextract , analyze diagnose commands do not work as expected and will be fixed in future releases. ( BZ#1688274 )
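For example, the manual workflow described above for attaching an sosreport to a case looks roughly like this; the case number and the archive path are placeholders:
# sosreport
# redhat-support-tool addattachment -c 01234567 /var/tmp/sosreport-example-2019-05-07.tar.xz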
[ "pathfix.py -pn -i %{__python3} PATH", "[mysqld] default_authentication_plugin=caching_sha2_password", "gsettings set org.gnome.mutter experimental-features \"['scale-monitor-framebuffer']\"", "p11_uri = library-description=OpenSC%20smartcard%20framework;slot-id=2", "mkfs.xfs -m reflink=0 block-device", "lpfc_enable_fc4_type=3", "qla2xxx.ql2xnvmeenable=1", "iptables --version iptables v1.8.0 (nf_tables)", "iptables --version iptables v1.8.0 (legacy)", "| % iptables-translate -A INPUT -j CHECKSUM --checksum-fill | nft # -A INPUT -j CHECKSUM --checksum-fill", "| % sudo iptables-save >/tmp/iptables.dump | % iptables-restore-translate -f /tmp/iptables.dump | # Translated by iptables-restore-translate v1.8.0 on Wed Oct 17 17:00:13 2018 | add table ip nat |", "xmlsec1 verify --trusted-pem /etc/pki/swid/CA/redhat.com/redhatcodesignca.cert /usr/share/redhat.com/com.redhat.RHEL-8-x86_64.swidtag", "systemctl mask [email protected]", "semanage boolean -l", "allow source_domain target_type:process2 { nnp_transition nosuid_transition };", "allow init_t fprintd_t:process2 { nnp_transition nosuid_transition };", "xfs_info /mount-point | grep ftype", "<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>", "~]# yum install network-scripts", "update-crypto-policies --set LEGACY", "\"Failed to add rule for system call ...\"", "journalctl -b _PID=1", "trap keybd_trap KEYBD", "dracut_args --omit-drivers \"radeon\" force_rebuild 1", "ethtool --set-priv-flags <ethX> disable-source-pruning on", "enable_files_domain=False", "rm -r /root/.cache/ipa", "probe kernel.function(\"can_nice\").call { }", "global in_can_nice% probe kernel.function(\"can_nice\").call { in_can_nice[tid()] ++; if (in_can_nice[tid()] > 1) { next } /* code for real probe handler */ } probe kernel.function(\"can_nice\").return { in_can_nice[tid()] --; }", "mkfs.xfs -m reflink=0 block-device", "[main] dhcp=dhclient", "dnf module enable libselinux-python dnf install libselinux-python", "dnf module install libselinux-python:2.8/common", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL", "Error generating remediation role .../remediation.sh: Exit code of oscap was 1: [output truncated]", "SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2", "[main] dhcp=dhclient", "The guest operating system reported that it failed with the following error code: 0x1E" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.0_release_notes/RHEL-8_0_0_release
Chapter 13. Managing Indexes
Chapter 13. Managing Indexes Indexing makes searching for and retrieving information easier by classifying and organizing attributes or values. This chapter describes the searching algorithm itself, placing indexing mechanisms in context, and then describes how to create, delete, and manage indexes. 13.1. About Indexes This section provides an overview of indexing in Directory Server. It contains the following topics: Section 13.1.1, "About Index Types" Section 13.1.2, "About Default and Database Indexes" Section 13.1.3, "Overview of the Searching Algorithm" Section 13.1.5, "Balancing the Benefits of Indexing" 13.1.1. About Index Types Indexes are stored in files in the directory's databases. The names of the files are based on the indexed attribute, not the type of index contained in the file. Each index file may contain multiple types of indexes if multiple indexes are maintained for the specific attribute. For example, all indexes maintained for the common name attribute are contained in the cn.db file. Directory Server supports the following types of index: Presence index (pres) contains a list of the entries that contain a particular attribute, which is very useful for searches. For example, it makes it easy to examine any entries that contain access control information. Generating an aci.db file that includes a presence index efficiently performs the search for ACI=* to generate the access control list for the server. Equality index (eq) improves searches for entries containing a specific attribute value. For example, an equality index on the cn attribute allows a user to perform the search for cn=Babs Jensen far more efficiently. Approximate index (approx) is used for efficient approximate or sounds-like searches. For example, an entry may include the attribute value cn=Firstname M Lastname . An approximate search would return this value for searches against cn~=Firstname Lastname , cn~=Firstname , or cn~=Lastname . Similarly, a search against l~=San Fransisco (note the misspelling) would return entries including l=San Francisco . Substring index (sub) is a costly index to maintain, but it allows efficient searching against substrings within entries. Substring indexes are limited to a minimum of three characters for each entry. For example, searches of the form cn=*derson , match the common names containing strings such as Bill Anderson , Jill Henderson , or Steve Sanderson . Similarly, the search for telephoneNumber= *555* returns all the entries in the directory with telephone numbers that contain 555 . International index speeds up searches for information in international directories. The process for creating an international index is similar to the process for creating regular indexes, except that it applies a matching rule by associating an object identifier (OID) with the attributes to be indexed. The supported locales and their associated OIDs are listed in Appendix D, Internationalization . If there is a need to configure the Directory Server to accept additional matching rules, contact Red Hat Consulting. 13.1.2. About Default and Database Indexes Directory Server contains a set of default indexes. When you create a new database, Directory Server copies these default indexes from cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config to the new database. Then the database only uses the copy of these indexes, which are stored in cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . 
Note Directory Server does not replicate settings in the cn=config entry. Therefore, you can configure indexes differently on servers that are part of a replication topology. For example, in an environment with cascading replication, you do not need to create custom indexes on a hub, if clients do not read data from the hub. To display the Directory Server default indexes: Note If you update the default index settings stored in cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config , the changes are not applied to the individual databases in cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . To display the indexes of an individual database: 13.1.3. Overview of the Searching Algorithm Indexes are used to speed up searches. To understand how the directory uses indexes, it helps to understand the searching algorithm. Each index contains a list of attributes (such as the cn , common name, attribute) and a list of IDs of the entries which contain the indexed attribute value: An LDAP client application sends a search request to the directory. The directory examines the incoming request to make sure that the specified base DN matches a suffix contained by one or more of its databases or database links. If they do match, the directory processes the request. If they do not match, the directory returns an error to the client indicating that the suffix does not match. If a referral has been specified in the nsslapd-referral attribute under cn=config , the directory also returns the LDAP URL where the client can attempt to pursue the request. The Directory Server examines the search filter to see what indexes apply, and it attempts to load the list of entry IDs from each index that satisfies the filter. The ID lists are combined based on whether the filter used AND or OR joins. Each filter component is handled independently and returns an ID list. If the list of entry IDs is larger than the configured ID list scan limit or if there is no index defined for the attribute, then Directory Server sets the results for this filtercomponent to allids . If, after applying the logical operations to the results of the individual search components the list is still ALLIDs it searches every entry in the database. This is an unindexed search. The Directory Server reads every entry from the id2entry.db database or the entry cache for every entry ID in the ID list (or from the entire database for an unindexed search). The server then checks the entries to see if they match the search filter. Each match is returned as it is found. The server continues through the list of IDs until it has searched all candidate entries or until it hits one of the configured resource limits. (Resource limits are listed in Section 14.5.3, "Setting User and Global Resource Limits Using the Command Line" .) Note It's possible to set separate resource limits for searches using the simple paged results control. For example, administrators can set high or unlimited size and look-through limits with paged searches, but use the lower default limits for non-paged searches. 13.1.4. Approximate Searches In addition, the directory uses a variation of the metaphone phonetic algorithm to perform searches on an approximate index. Each value is treated as a sequence of words, and a phonetic code is generated for each word. Note The metaphone phonetic algorithm in Directory Server supports only US-ASCII letters. Therefore, use approximate indexing only with English values. 
Values entered on an approximate search are similarly translated into a sequence of phonetic codes. An entry is considered to match a query if both of the following are true: All of the query string codes match the codes generated in the entry string. All of the query string codes are in the same order as the entry string codes. Name in the Directory (Phonetic Code) Query String (Phonetic code) Match Comments Alice B Sarette (ALS B SRT) Alice Sarette (ALS SRT) Matches. Codes are specified in the correct order. Alice Sarrette (ALS SRT) Matches. Codes are specified in the correct order, despite the misspelling of Sarette. Surette (SRT) Matches. The generated code exists in the original name, despite the misspelling of Sarette. Bertha Sarette (BR0 SRT) No match. The code BR0 does not exist in the original name. Sarette, Alice (SRT ALS) No match. The codes are not specified in the correct order. 13.1.5. Balancing the Benefits of Indexing Before creating new indexes, balance the benefits of maintaining indexes against the costs. Approximate indexes are not efficient for attributes commonly containing numbers, such as telephone numbers. Substring indexes do not work for binary attributes. Equality indexes should be avoided if the value is big (such as attributes intended to contain photographs or passwords containing encrypted data). Maintaining indexes for attributes not commonly used in a search increases overhead without improving global searching performance. Attributes that are not indexed can still be specified in search requests, although the search performance may be degraded significantly, depending on the type of search. The more indexes you maintain, the more disk space you require. Indexes can become very time-consuming. For example: The Directory Server receives an add or modify operation. The Directory Server examines the indexing attributes to determine whether an index is maintained for the attribute values. If the created attribute values are indexed, then Directory Server adds or deletes the new attribute values from the index. The actual attribute values are created in the entry. For example, the Directory Server adds the entry: The Directory Server maintains the following indexes: Equality, approximate, and substring indexes for cn (common name) and sn (surname) attributes. Equality and substring indexes for the telephone number attribute. Substring indexes for the description attribute. When adding that entry to the directory, the Directory Server must perform these steps: Create the cn equality index entry for John and John Doe . Create the appropriate cn approximate index entries for John and John Doe . Create the appropriate cn substring index entries for John and John Doe . Create the sn equality index entry for Doe . Create the appropriate sn approximate index entry for Doe . Create the appropriate sn substring index entries for Doe . Create the telephone number equality index entry for 408 555 8834 . Create the appropriate telephone number substring index entries for 408 555 8834 . Create the appropriate description substring index entries for Manufacturing lead for the Z238 line of widgets . A large number of substring entries are generated for this string. As this example shows, the number of actions required to create and maintain databases for a large directory can be resource-intensive. 13.1.6. Indexing Limitations You cannot index virtual attributes, such as nsrole and cos_attribute . Virtual attributes contain computed values. 
If you index these attributes, Directory Server can return an invalid set of entries to direct and internal searches.
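The approximate and substring filters described above can be exercised directly with ldapsearch to confirm how an indexed attribute behaves. A minimal sketch, assuming the dc=example,dc=com suffix and the cn indexes discussed in this section; adjust the bind DN, host, and suffix to your deployment:

# Approximate (sounds-like) search that the cn approximate (approx) index can satisfy
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -b "dc=example,dc=com" "(cn~=Firstname Lastname)"

# Substring search that the cn substring (sub) index can satisfy
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -b "dc=example,dc=com" "(cn=*derson)"

Whether these filters are served from the index or fall back to an unindexed search depends on the ID list scan limit described in the searching algorithm above.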
[ "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config\" '(objectClass=nsindex)'", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend index list database_name", "dn: cn=John Doe,ou=People,dc=example,dc=com objectclass: top objectClass: person objectClass: orgperson objectClass: inetorgperson cn: John Doe cn: John sn: Doe ou: Manufacturing ou: people telephoneNumber: 408 555 8834 description: Manufacturing lead for the Z238 line of widgets." ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/managing_indexes
11.4. Moving Swap Space
11.4. Moving Swap Space To move swap space from one location to another, follow the steps for removing swap space, and then follow the steps for adding swap space.
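A minimal command-level sketch of that remove-then-add sequence, assuming an LVM2 layout; the logical volume names are placeholders, and the corresponding swap entry in /etc/fstab must be updated to match:

swapoff /dev/VolGroup00/LogVol01        # stop using the old swap volume
# remove or update the old swap entry in /etc/fstab, create the new volume, then:
mkswap /dev/VolGroup00/LogVol02         # format the new volume as swap
swapon /dev/VolGroup00/LogVol02         # enable the new swap space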
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Swap_Space-Moving_Swap_Space
function::proc_mem_rss_pid
function::proc_mem_rss_pid Name function::proc_mem_rss_pid - Program resident set size in pages Synopsis Arguments pid The PID of the process to examine Description Returns the resident set size in pages of the given process, or zero when the process does not exist or the number of pages could not be retrieved.
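A brief usage sketch: the following one-liner samples the function every five seconds for an assumed PID of 1234; substitute the process ID you want to watch, and note that stap normally requires root privileges and the matching kernel debuginfo packages.

# Print the resident set size (in pages) of PID 1234 every 5 seconds
stap -e 'probe timer.s(5) { printf("RSS pages: %d\n", proc_mem_rss_pid(1234)) }'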
[ "proc_mem_rss_pid:long(pid:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-rss-pid
Chapter 1. Red Hat OpenShift support for Windows Containers overview
Chapter 1. Red Hat OpenShift support for Windows Containers overview Red Hat OpenShift support for Windows Containers is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. For more information, see the release notes . For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/windows-container-overview
Chapter 109. Checking IdM replication using Healthcheck
Chapter 109. Checking IdM replication using Healthcheck You can test Identity Management (IdM) replication using the Healthcheck tool. Prerequisites You are using RHEL version 8.1 or newer. 109.1. Replication healthcheck tests The Healthcheck tool tests the Identity Management (IdM) topology configuration and searches for replication conflict issues. To list all tests, run the ipa-healthcheck with the --list-sources option: The topology tests are placed under the ipahealthcheck.ipa.topology and ipahealthcheck.ds.replication sources: IPATopologyDomainCheck This test verifies: That no single server is disconnected from the topology. That servers do not have more than the recommended number of replication agreements. If the test succeeds, the test returns the configured domains. Otherwise, specific connection errors are reported. Note The test runs the ipa topologysuffix-verify command for the domain suffix. It also runs the command for the ca suffix if the IdM Certificate Authority server role is configured on this server. ReplicationConflictCheck The test searches for entries in LDAP matching (&(!(objectclass=nstombstone))(nsds5ReplConflict=*)) . Note Run these tests on all IdM servers when trying to check for issues. Additional resources Solving common replication problems 109.2. Screening replication using Healthcheck Follow this procedure to run a standalone manual test of an Identity Management (IdM) replication topology and configuration using the Healthcheck tool. The Healthcheck tool includes many tests. Therefore, you can shorten the results with: Replication conflict test: --source=ipahealthcheck.ds.replication Correct topology test: --source=ipahealthcheck.ipa.topology Prerequisites You are logged in as the root user. Procedure To run Healthcheck replication conflict and topology checks, enter: Four different results are possible: SUCCESS - the test passed successfully. WARNING - the test passed but there might be a problem. ERROR - the test failed. CRITICAL - the test failed and it affects the IdM server functionality. Additional resources man ipa-healthcheck 109.3. Additional resources Healthcheck in IdM
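The LDAP filter used by the ReplicationConflictCheck test can also be run by hand when you want to inspect the conflicting entries directly. A minimal sketch, assuming the dc=idm,dc=example,dc=com suffix and the server.idm.example.com host; adjust both to your environment:

# Search for replication conflict entries using the same filter as ReplicationConflictCheck
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.idm.example.com -b "dc=idm,dc=example,dc=com" "(&(!(objectclass=nstombstone))(nsds5ReplConflict=*))"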
[ "ipa-healthcheck --list-sources", "ipa-healthcheck --source=ipahealthcheck.ds.replication --source=ipahealthcheck.ipa.topology", "{ \"source\": \"ipahealthcheck.ipa.topology\", \"check\": \"IPATopologyDomainCheck\", \"result\": \"SUCCESS\", \"kw\": { \"suffix\": \"domain\" } }", "{ \"source\": \"ipahealthcheck.ipa.topology\", \"check\": \"IPATopologyDomainCheck\", \"result\": \"ERROR\", \"uuid\": d6ce3332-92da-423d-9818-e79f49ed321f \"when\": 20191007115449Z \"duration\": 0.005943 \"kw\": { \"msg\": \"topologysuffix-verify domain failed, server2 is not connected (server2_139664377356472 in MainThread)\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/checking-idm-replication-using-healthcheck_configuring-and-managing-idm
Managing Fuse on OpenShift
Managing Fuse on OpenShift Red Hat Fuse 7.13 Manage Fuse applications with the Fuse Console Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/index
Chapter 8. Open source license
Chapter 8. Open source license Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your" ) shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. 
Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: You must give any other recipients of the Work or Derivative Works a copy of this License; and You must cause any modified files to carry prominent notices stating that You changed the files; and You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/creating_and_using_execution_environments/assembly-open-source-license
Chapter 8. Deploy an AWS Route 53 loadbalancer
Chapter 8. Deploy an AWS Route 53 loadbalancer This topic describes the procedure required to configure DNS-based failover for Multi-AZ Red Hat build of Keycloak clusters using AWS Route53 for an active/passive setup. These instructions are intended to be used with the setup described in the Concepts for active-passive deployments chapter. Use them together with the other building blocks outlined in the Building blocks active-passive deployments chapter. Note We provide these blueprints to show a minimal functionally complete example with a good baseline performance for regular installations. You would still need to adapt it to your environment and your organization's standards and security best practices. 8.1. Architecture All Red Hat build of Keycloak client requests are routed by a DNS name managed by Route53 records. Route53 is responsible for ensuring that all client requests are routed to the Primary cluster when it is available and healthy, or to the backup cluster in the event of the primary availability-zone or Red Hat build of Keycloak deployment failing. If the primary site fails, the DNS changes will need to propagate to the clients. Depending on the client's configuration, the propagation may take some minutes. When using mobile connections, some internet providers might not respect the TTL of the DNS entries, which can lead to an extended time before the clients can connect to the new site. Figure 8.1. AWS Global Accelerator Failover Two OpenShift Routes are exposed on both the Primary and Backup ROSA clusters. The first Route uses the Route53 DNS name to service client requests, whereas the second Route is used by Route53 to monitor the health of the Red Hat build of Keycloak cluster. 8.2. Prerequisites Deployment of Red Hat build of Keycloak as described in Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator on a ROSA cluster running OpenShift 4.14 or later in two AWS availability zones in one AWS region. An owned domain for client requests to be routed through. 8.3. Procedure Create a Route53 Hosted Zone using the root domain name through which you want all Red Hat build of Keycloak clients to connect. Take note of the "Hosted zone ID", because this ID is required in later steps. Retrieve the "Hosted zone ID" and DNS name associated with each ROSA cluster. For both the Primary and Backup cluster, perform the following steps: Log in to the ROSA cluster. Retrieve the cluster LoadBalancer Hosted Zone ID and DNS hostname Command: HOSTNAME=USD(oc -n openshift-ingress get svc router-default \ -o jsonpath='{.status.loadBalancer.ingress[].hostname}' ) aws elbv2 describe-load-balancers \ --query "LoadBalancers[?DNSName=='USD{HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}" \ --region eu-west-1 \ 1 --output json 1 The AWS region hosting your ROSA cluster Output: [ { "CanonicalHostedZoneId": "Z2IFOLAFXWLO4F", "DNSName": "ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com" } ] Note ROSA clusters running OpenShift 4.13 and earlier use classic load balancers instead of application load balancers. Use the aws elb describe-load-balancers command and an updated query string instead.
Create Route53 health checks Command: function createHealthCheck() { # Creating a hash of the caller reference to allow for names longer than 64 characters REF=(USD(echo USD1 | sha1sum )) aws route53 create-health-check \ --caller-reference "USDREF" \ --query "HealthCheck.Id" \ --no-cli-pager \ --output text \ --health-check-config ' { "Type": "HTTPS", "ResourcePath": "/lb-check", "FullyQualifiedDomainName": "'USD1'", "Port": 443, "RequestInterval": 30, "FailureThreshold": 1, "EnableSNI": true } ' } CLIENT_DOMAIN="client.keycloak-benchmark.com" 1 PRIMARY_DOMAIN="primary.USD{CLIENT_DOMAIN}" 2 BACKUP_DOMAIN="backup.USD{CLIENT_DOMAIN}" 3 createHealthCheck USD{PRIMARY_DOMAIN} createHealthCheck USD{BACKUP_DOMAIN} 1 The domain which Red Hat build of Keycloak clients should connect to. This should be the same, or a subdomain, of the root domain used to create the Hosted Zone . 2 The subdomain that will be used for health probes on the Primary cluster 3 The subdomain that will be used for health probes on the Backup cluster Output: 233e180f-f023-45a3-954e-415303f21eab 1 799e2cbb-43ae-4848-9b72-0d9173f04912 2 1 The ID of the Primary Health check 2 The ID of the Backup Health check Create the Route53 record set Command: HOSTED_ZONE_ID="Z09084361B6LKQQRCVBEY" 1 PRIMARY_LB_HOSTED_ZONE_ID="Z2IFOLAFXWLO4F" PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab BACKUP_LB_HOSTED_ZONE_ID="Z2IFOLAFXWLO4F" BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912 aws route53 change-resource-record-sets \ --hosted-zone-id Z09084361B6LKQQRCVBEY \ --query "ChangeInfo.Id" \ --output text \ --change-batch ' { "Comment": "Creating Record Set for 'USD{CLIENT_DOMAIN}'", "Changes": [{ "Action": "CREATE", "ResourceRecordSet": { "Name": "'USD{PRIMARY_DOMAIN}'", "Type": "A", "AliasTarget": { "HostedZoneId": "'USD{PRIMARY_LB_HOSTED_ZONE_ID}'", "DNSName": "'USD{PRIMARY_LB_DNS}'", "EvaluateTargetHealth": true } } }, { "Action": "CREATE", "ResourceRecordSet": { "Name": "'USD{BACKUP_DOMAIN}'", "Type": "A", "AliasTarget": { "HostedZoneId": "'USD{BACKUP_LB_HOSTED_ZONE_ID}'", "DNSName": "'USD{BACKUP_LB_DNS}'", "EvaluateTargetHealth": true } } }, { "Action": "CREATE", "ResourceRecordSet": { "Name": "'USD{CLIENT_DOMAIN}'", "Type": "A", "SetIdentifier": "client-failover-primary-'USD{SUBDOMAIN}'", "Failover": "PRIMARY", "HealthCheckId": "'USD{PRIMARY_HEALTH_ID}'", "AliasTarget": { "HostedZoneId": "'USD{HOSTED_ZONE_ID}'", "DNSName": "'USD{PRIMARY_DOMAIN}'", "EvaluateTargetHealth": true } } }, { "Action": "CREATE", "ResourceRecordSet": { "Name": "'USD{CLIENT_DOMAIN}'", "Type": "A", "SetIdentifier": "client-failover-backup-'USD{SUBDOMAIN}'", "Failover": "SECONDARY", "HealthCheckId": "'USD{BACKUP_HEALTH_ID}'", "AliasTarget": { "HostedZoneId": "'USD{HOSTED_ZONE_ID}'", "DNSName": "'USD{BACKUP_DOMAIN}'", "EvaluateTargetHealth": true } } }] } ' 1 The ID of the Hosted Zone created earlier Output: Wait for the Route53 records to be updated Command: aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI Update or create the Red Hat build of Keycloak deployment For both the Primary and Backup cluster, perform the following steps: Log in to the ROSA cluster Ensure the Keycloak CR has the following configuration apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USD{CLIENT_DOMAIN} 1 1 The domain 
clients used to connect to Red Hat build of Keycloak To ensure that request forwarding works, edit the Red Hat build of Keycloak CR to specify the hostname through which clients will access the Red Hat build of Keycloak instances. This hostname must be the USDCLIENT_DOMAIN used in the Route53 configuration. Create health check Route Command: cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: route.openshift.io/v1 kind: Route metadata: name: aws-health-route spec: host: USDDOMAIN 2 port: targetPort: https tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough to: kind: Service name: keycloak-service weight: 100 wildcardPolicy: None EOF 1 USDNAMESPACE should be replaced with the namespace of your Red Hat build of Keycloak deployment 2 USDDOMAIN should be replaced with either the PRIMARY_DOMAIN or BACKUP_DOMAIN , depending on whether the current cluster is the Primary or Backup cluster, respectively. 8.4. Verify Navigate to the chosen CLIENT_DOMAIN in your local browser and log in to the Red Hat build of Keycloak console. To test that failover works as expected, log in to the Primary cluster and scale the Red Hat build of Keycloak deployment to zero Pods. Scaling will cause the Primary's health checks to fail and Route53 should start routing traffic to the Red Hat build of Keycloak Pods on the Backup cluster.
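Failover can also be observed from the command line rather than the browser. A minimal sketch, assuming the client.keycloak-benchmark.com example domain used above; dig shows where the client-facing record currently resolves, and curl probes the same /lb-check path that the Route53 health checks use (add -k if your clusters present certificates your local trust store does not recognize):

# Check which load balancer the client-facing record currently resolves to
dig +short client.keycloak-benchmark.com

# Probe the health-check path on the Primary and Backup clusters
curl -s -o /dev/null -w '%{http_code}\n' https://primary.client.keycloak-benchmark.com/lb-check
curl -s -o /dev/null -w '%{http_code}\n' https://backup.client.keycloak-benchmark.com/lb-check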
[ "HOSTNAME=USD(oc -n openshift-ingress get svc router-default -o jsonpath='{.status.loadBalancer.ingress[].hostname}' ) aws elbv2 describe-load-balancers --query \"LoadBalancers[?DNSName=='USD{HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}\" --region eu-west-1 \\ 1 --output json", "[ { \"CanonicalHostedZoneId\": \"Z2IFOLAFXWLO4F\", \"DNSName\": \"ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com\" } ]", "function createHealthCheck() { # Creating a hash of the caller reference to allow for names longer than 64 characters REF=(USD(echo USD1 | sha1sum )) aws route53 create-health-check --caller-reference \"USDREF\" --query \"HealthCheck.Id\" --no-cli-pager --output text --health-check-config ' { \"Type\": \"HTTPS\", \"ResourcePath\": \"/lb-check\", \"FullyQualifiedDomainName\": \"'USD1'\", \"Port\": 443, \"RequestInterval\": 30, \"FailureThreshold\": 1, \"EnableSNI\": true } ' } CLIENT_DOMAIN=\"client.keycloak-benchmark.com\" 1 PRIMARY_DOMAIN=\"primary.USD{CLIENT_DOMAIN}\" 2 BACKUP_DOMAIN=\"backup.USD{CLIENT_DOMAIN}\" 3 createHealthCheck USD{PRIMARY_DOMAIN} createHealthCheck USD{BACKUP_DOMAIN}", "233e180f-f023-45a3-954e-415303f21eab 1 799e2cbb-43ae-4848-9b72-0d9173f04912 2", "HOSTED_ZONE_ID=\"Z09084361B6LKQQRCVBEY\" 1 PRIMARY_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab BACKUP_LB_HOSTED_ZONE_ID=\"Z2IFOLAFXWLO4F\" BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912 aws route53 change-resource-record-sets --hosted-zone-id Z09084361B6LKQQRCVBEY --query \"ChangeInfo.Id\" --output text --change-batch ' { \"Comment\": \"Creating Record Set for 'USD{CLIENT_DOMAIN}'\", \"Changes\": [{ \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{PRIMARY_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{PRIMARY_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{BACKUP_DOMAIN}'\", \"Type\": \"A\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{BACKUP_LB_HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_LB_DNS}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-primary-'USD{SUBDOMAIN}'\", \"Failover\": \"PRIMARY\", \"HealthCheckId\": \"'USD{PRIMARY_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{PRIMARY_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }, { \"Action\": \"CREATE\", \"ResourceRecordSet\": { \"Name\": \"'USD{CLIENT_DOMAIN}'\", \"Type\": \"A\", \"SetIdentifier\": \"client-failover-backup-'USD{SUBDOMAIN}'\", \"Failover\": \"SECONDARY\", \"HealthCheckId\": \"'USD{BACKUP_HEALTH_ID}'\", \"AliasTarget\": { \"HostedZoneId\": \"'USD{HOSTED_ZONE_ID}'\", \"DNSName\": \"'USD{BACKUP_DOMAIN}'\", \"EvaluateTargetHealth\": true } } }] } '", "/change/C053410633T95FR9WN3YI", "aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USD{CLIENT_DOMAIN} 1", "cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: route.openshift.io/v1 kind: Route metadata: name: aws-health-route spec: 
host: USDDOMAIN 2 port: targetPort: https tls: insecureEdgeTerminationPolicy: Redirect termination: passthrough to: kind: Service name: keycloak-service weight: 100 wildcardPolicy: None EOF" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/deploy-aws-route53-loadbalancer-
Chapter 5. Disconnected installation
Chapter 5. Disconnected installation If you are not connected to the internet or do not have access to online repositories, you can install Red Hat Ansible Automation Platform without an active internet connection. 5.1. Prerequisites Before installing Ansible Automation Platform on a disconnected network, you must meet the following prerequisites: A subscription manifest that you can upload to the platform. For more information, see Obtaining a manifest file . The Ansible Automation Platform setup bundle at Customer Portal is downloaded. The DNS records for the automation controller and private automation hub servers are created. 5.2. Ansible Automation Platform installation on disconnected RHEL You can install Ansible Automation Platform without an internet connection by using the installer-managed database located on the automation controller. The setup bundle is recommended for disconnected installation because it includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform RPM packages and the default execution environment (EE) images. Additional Resources For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Ansible variables . 5.2.1. System requirements for disconnected installation Ensure that your system has all the hardware requirements before performing a disconnected installation of Ansible Automation Platform. You can find these in system requirements . 5.2.2. RPM Source RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must first obtain access to the BaseOS and AppStream repositories. Use Satellite to sync repositories and add dependencies. If you prefer an alternative tool, you can choose between the following options: Reposync The RHEL Binary DVD Note The RHEL Binary DVD method requires the DVD for supported versions of RHEL. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported. Additional resources Satellite 5.3. Synchronizing RPM repositories using reposync To perform a reposync, you need a RHEL host that has access to the internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server. When downloading RPMs, ensure that you use the applicable distribution. Procedure Attach the required BaseOS and AppStream repositories: # subscription-manager repos \ --enable rhel-9-for-x86_64-baseos-rpms \ --enable rhel-9-for-x86_64-appstream-rpms Perform the reposync: # dnf install yum-utils # reposync -m --download-metadata --gpgcheck \ -p /path/to/download Use reposync with --download-metadata and without --newest-only . See RHEL 8 Reposync. Without --newest-only , the downloaded repositories can take an extended amount of time to sync because they total many gigabytes. After the reposync is completed, your repositories are ready to use with a web server. Move the repositories to your disconnected network. 5.4. Creating a new web server to host repositories If you do not have an existing web server to host your repositories, you can create one with your synced repositories.
Procedure Install prerequisites: USD sudo dnf install httpd Configure httpd to serve the repo directory: /etc/httpd/conf.d/repository.conf DocumentRoot '/path/to/repos' <LocationMatch "^/+USD"> Options -Indexes ErrorDocument 403 /.noindex.html </LocationMatch> <Directory '/path/to/repos'> Options All Indexes FollowSymLinks AllowOverride None Require all granted </Directory> Ensure that the directory is readable by an apache user: USD sudo chown -R apache /path/to/repos Configure SELinux: USD sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?" USD sudo restorecon -ir /path/to/repos Enable httpd: USD sudo systemctl enable --now httpd.service Open firewall: USD sudo firewall-cmd --zone=public --add-service=http -add-service=https --permanent USD sudo firewall-cmd --reload On automation services, add a repo file at /etc/yum.repos.d/local.repo , and add the optional repos if needed: [Local-BaseOS] name=Local BaseOS baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [Local-AppStream] name=Local AppStream baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release 5.5. Accessing RPM repositories from a locally mounted DVD If you plan to access the repositories from the RHEL binary DVD, you must first set up a local repository. Procedure Mount DVD or ISO: DVD # mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd ISO # mkdir /media/rheldvd && mount -o loop rhrhel-8.6-x86_64-dvd.iso /media/rheldvd Create yum repo file at /etc/yum.repos.d/dvd.repo [dvd-BaseOS] name=DVD for RHEL - BaseOS baseurl=file:///media/rheldvd/BaseOS enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [dvd-AppStream] name=DVD for RHEL - AppStream baseurl=file:///media/rheldvd/AppStream enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release Import the gpg key: # rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release Note If the key is not imported you will see an error similar to # Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com] Additional Resources For further detail on setting up a repository see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8 . 5.6. Downloading and installing the Ansible Automation Platform setup bundle Choose the setup bundle to download Ansible Automation Platform for disconnected installations. This bundle includes the RPM content for Ansible Automation Platform and the default execution environment images that will be uploaded to your private automation hub during the installation process. Procedure Download the Ansible Automation Platform setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and clicking Download Now for the Ansible Automation Platform 2.5 Setup Bundle. On control node, untar the bundle: USD tar xvf \ ansible-automation-platform-setup-bundle-2.5-1.tar.gz USD cd ansible-automation-platform-setup-bundle-2.5-1 Edit the inventory file to include variables based on your host names and desired password values. Note See section 3.2 Inventory file examples base on installation scenarios for a list of examples that best fits your scenario. 5.7. 
Completing post installation tasks After you have completed the installation of Ansible Automation Platform, ensure that automation hub and automation controller deploy properly. Before your first login, you must add your subscription information to the platform. To obtain your subscription information in uploadable form, see Obtaining a manifest file in Access management and authentication . Once you have obtained your subscription manifest, see Getting started with Ansible Automation Platform for instructions on how to upload your subscription information. Now that you have successfully installed Ansible Automation Platform, to begin using its features, see the following guides for your steps: Getting started with Ansible Automation Platform . Managing automation content . Creating and using execution environments .
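Before running the installer on the disconnected hosts, it is worth confirming that the local repositories configured in the earlier sections resolve correctly. A minimal check, assuming the local.repo or dvd.repo file shown above is already in place:

# Refresh metadata and confirm the BaseOS and AppStream repositories are enabled and reachable
dnf clean all
dnf repolist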
[ "subscription-manager repos --enable rhel-9-for-x86_64-baseos-rpms --enable rhel-9-for-x86_64-appstream-rpms", "dnf install yum-utils reposync -m --download-metadata --gpgcheck -p /path/to/download", "sudo dnf install httpd", "/etc/httpd/conf.d/repository.conf DocumentRoot '/path/to/repos' <LocationMatch \"^/+USD\"> Options -Indexes ErrorDocument 403 /.noindex.html </LocationMatch> <Directory '/path/to/repos'> Options All Indexes FollowSymLinks AllowOverride None Require all granted </Directory>", "sudo chown -R apache /path/to/repos", "sudo semanage fcontext -a -t httpd_sys_content_t \"/path/to/repos(/.*)?\" sudo restorecon -ir /path/to/repos", "sudo systemctl enable --now httpd.service", "sudo firewall-cmd --zone=public --add-service=http -add-service=https --permanent sudo firewall-cmd --reload", "[Local-BaseOS] name=Local BaseOS baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [Local-AppStream] name=Local AppStream baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd", "mkdir /media/rheldvd && mount -o loop rhrhel-8.6-x86_64-dvd.iso /media/rheldvd", "[dvd-BaseOS] name=DVD for RHEL - BaseOS baseurl=file:///media/rheldvd/BaseOS enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [dvd-AppStream] name=DVD for RHEL - AppStream baseurl=file:///media/rheldvd/AppStream enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release", "Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]", "tar xvf ansible-automation-platform-setup-bundle-2.5-1.tar.gz cd ansible-automation-platform-setup-bundle-2.5-1" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/disconnected-installation
6.2. Configuration Tasks
6.2. Configuration Tasks Configuring Red Hat High Availability Add-On software with the ccs consists of the following steps: Ensuring that ricci is running on all nodes in the cluster. Refer to Section 6.3, "Starting ricci " . Creating a cluster. Refer to Section 6.4, "Creating and Modifying a Cluster" . Configuring fence devices. Refer to Section 6.5, "Configuring Fence Devices" . Configuring fencing for cluster members. Refer to Section 6.7, "Configuring Fencing for Cluster Members" . Creating failover domains. Refer to Section 6.8, "Configuring a Failover Domain" . Creating resources. Refer to Section 6.9, "Configuring Global Cluster Resources" . Creating cluster services. Refer to Section 6.10, "Adding a Cluster Service to the Cluster" . Configuring a quorum disk, if necessary. Refer to Section 6.13, "Configuring a Quorum Disk" . Configuring global cluster properties. Refer to Section 6.14, "Miscellaneous Cluster Configuration" . Propagating the cluster configuration file to all of the cluster nodes. Refer to Section 6.15, "Propagating the Configuration File to the Cluster Nodes" .
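For the first task in the list above, ricci can be started immediately and enabled for subsequent boots with the standard Red Hat Enterprise Linux 6 service tools. A minimal sketch, to be run as root on each cluster node:

# Start ricci now and enable it at boot
service ricci start
chkconfig ricci on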
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-tasks-ccs-CA
2.8. Performance Issues: Check the Red Hat Customer Portal
2.8. Performance Issues: Check the Red Hat Customer Portal For information on recommendations for deploying and upgrading Red Hat Enterprise Linux clusters using the High Availability Add-On and Red Hat Global File System 2 (GFS2) see the article "Red Hat Enterprise Linux Cluster, High Availability, and GFS Deployment Best Practices" on the Red Hat Customer Portal at https://access.redhat.com/kb/docs/DOC-40821 .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-customer-portal
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.1, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl516 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.1 Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql95 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.1 Components" . 3.1.3. Running a System Service from a Software Collection Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql95 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.1 Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. 
For example, to display the manual page for rh-mariadb101 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . The following container images are available with Red Hat Software Collections 3.1: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/httpd-24-rhel7 rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/php-70-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 rhscl/devtoolset-6-perftools-rhel7 rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 rhscl/perl-524-rhel7 rhscl/redis-32-rhel7 rhscl/mongodb-32-rhel7 rhscl/php-56-rhel7 rhscl/python-35-rhel7 rhscl/ruby-23-rhel7 The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 rhscl/devtoolset-4-perftools-rhel7 rhscl/mariadb-101-rhel7 rhscl/nginx-18-rhel7 rhscl/nodejs-4-rhel7 rhscl/postgresql-95-rhel7 rhscl/ror-42-rhel7 rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 rhscl/mongodb-26-rhel7 rhscl/mysql-56-rhel7 rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 rhscl/perl-520-rhel7 rhscl/postgresql-94-rhel7 rhscl/python-34-rhel7 rhscl/ror-41-rhel7 rhscl/ruby-22-rhel7 rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported.
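As a quick sanity check of the scl enable pattern described in Section 3.1.1, the following runs a single command from a collection; it assumes the rh-postgresql95 Software Collection is already installed:

# Run a one-off command with the collection's binaries taking precedence over the system ones
scl enable rh-postgresql95 'psql --version'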
[ "~]USD scl enable rh-perl524 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql95 bash", "~]USD echo USDX_SCLS python27 rh-postgresql95", "~]# service rh-postgresql95-postgresql start Starting rh-postgresql95-postgresql service: [ OK ] ~]# chkconfig rh-postgresql95-postgresql on", "~]USD scl enable rh-mariadb101 \"man rh-mariadb101\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.1_release_notes/chap-usage
Chapter 8. Using Ansible to manage DNS records in IdM
Chapter 8. Using Ansible to manage DNS records in IdM This chapter describes how to manage DNS records in Identity Management (IdM) using an Ansible playbook. As an IdM administrator, you can add, modify, and delete DNS records in IdM. The chapter contains the following sections: Ensuring the presence of A and AAAA DNS records in IdM using Ansible Ensuring the presence of A and PTR DNS records in IdM using Ansible Ensuring the presence of multiple DNS records in IdM using Ansible Ensuring the presence of multiple CNAME records in IdM using Ansible Ensuring the presence of an SRV record in IdM using Ansible 8.1. DNS records in IdM Identity Management (IdM) supports many different DNS record types. The following four are used most frequently: A This is a basic map for a host name and an IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is an IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is an IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _service . _protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record (PTR) adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the IdM Web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. 8.2. Common ipa dnsrecord-* options You can use the following options when adding, modifying and deleting the most common DNS resource record types in Identity Management (IdM): A (IPv4) AAAA (IPv6) SRV PTR In Bash , you can define multiple entries by listing the values in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . Table 8.1. General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 8.2. 
"A" record options Option Description Examples --a-rec = ARECORD Passes a single A record or a list of A records. ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123 Can create a wildcard A record with a given IP address. ipa dnsrecord-add idm.example.com "*" --a-rec=192.168.122.123 [a] --a-ip-address = string Gives the IP address for the record. When creating a record, the option to specify the A record value is --a-rec . However, when modifying an A record, the --a-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --a-rec 192.168.122.123 --a-ip-address 192.168.122.124 [a] The example creates a wildcard A record with the IP address of 192.0.2.123. Table 8.3. "AAAA" record options Option Description Example --aaaa-rec = AAAARECORD Passes a single AAAA (IPv6) record or a list of AAAA records. ipa dnsrecord-add idm.example.com www --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address = string Gives the IPv6 address for the record. When creating a record, the option to specify the A record value is --aaaa-rec . However, when modifying an A record, the --aaaa-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address 2001:db8::1231:5676 Table 8.4. "PTR" record options Option Description Example --ptr-rec = PTRRECORD Passes a single PTR record or a list of PTR records. When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reversed, compared to the usage for adding other DNS records. Typically, the host IP address is the last octet of the IP address in a given network. The first example on the right adds a PTR record for server4.idm.example.com with IPv4 address 192.168.122.4. The second example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.example.com with the IP address 2001:DB8::1111 . ipa dnsrecord-add 122.168.192.in-addr.arpa 4 --ptr-rec server4.idm.example.com. USD ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.idm.example.com. --ptr-hostname = string Gives the host name for the record. Table 8.5. "SRV" Record Options Option Description Example --srv-rec = SRVRECORD Passes a single SRV record or a list of SRV records. In the examples on the right, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. The weight values of 51 and 49 in the examples add up to 100 and represent the probability, in percentages, that a particular record is used. # ipa dnsrecord-add idm.example.com _ldap._tcp --srv-rec="0 51 389 server1.idm.example.com." # ipa dnsrecord-add server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority=0 --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. 
The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 49 389 server2.idm.example.com." --srv-weight=60 --srv-port = number Gives the port for the service on the target host. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 60 389 server2.idm.example.com." --srv-port=636 --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. Additional resources Run ipa dnsrecord-add --help . 8.3. Ensuring the presence of A and AAAA DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that A and AAAA records for a particular IdM host are present. In the example used in the procedure below, an IdM administrator ensures the presence of A and AAAA records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-A-and-AAAA-records-are-present.yml Ansible playbook file. For example: Open the ensure-A-and-AAAA-records-are-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable, set the name variable to host1 , and the a_ip_address variable to 192.168.122.123 . In the records variable, set the name variable to host1 , and the aaaa_ip_address variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See DNS records in IdM . See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 8.4. Ensuring the presence of A and PTR DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that an A record for a particular IdM host is present, with a corresponding PTR record. In the example used in the procedure below, an IdM administrator ensures the presence of A and PTR records for host1 with an IP address of 192.168.122.45 in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. 
You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com DNS zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-dnsrecord-with-reverse-is-present.yml Ansible playbook file. For example: Open the ensure-dnsrecord-with-reverse-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the ip_address variable to 192.168.122.45 . Set the create_reverse variable to true . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See DNS records in IdM . See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 8.5. Ensuring the presence of multiple DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that multiple values are associated with a particular IdM DNS record. In the example used in the procedure below, an IdM administrator ensures the presence of multiple A records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-presence-multiple-records.yml Ansible playbook file. For example: Open the ensure-presence-multiple-records-copy.yml file for editing. 
Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. In the records section, set the name variable to host1 . In the records section, set the zone_name variable to idm.example.com . In the records section, set the a_rec variable to 192.168.122.112 and to 192.168.122.122 . Define a second record in the records section: Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the aaaa_rec variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See DNS records in IdM . See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 8.6. Ensuring the presence of multiple CNAME records in IdM using Ansible A Canonical Name record (CNAME record) is a type of resource record in the Domain Name System (DNS) that maps one domain name, an alias, to another name, the canonical name. You may find CNAME records useful when running multiple services from a single IP address: for example, an FTP service and a web service, each running on a different port. Follow this procedure to use an Ansible playbook to ensure that multiple CNAME records are present in IdM DNS. In the example used in the procedure below, host03 is both an HTTP server and an FTP server. The IdM administrator ensures the presence of the www and ftp CNAME records for the host03 A record in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . The host03 A record exists in the idm.example.com zone. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-CNAME-record-is-present.yml Ansible playbook file. For example: Open the ensure-CNAME-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: (Optional) Adapt the description provided by the name of the play. Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable section, set the following variables and values: Set the name variable to www . Set the cname_hostname variable to host03 . Set the name variable to ftp . Set the cname_hostname variable to host03 . This is the modified Ansible playbook file for the current example: Save the file. 
Run the playbook: Additional resources See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 8.7. Ensuring the presence of an SRV record in IdM using Ansible A DNS service (SRV) record defines the hostname, port number, transport protocol, priority and weight of a service available in a domain. In Identity Management (IdM), you can use SRV records to locate IdM servers and replicas. Follow this procedure to use an Ansible playbook to ensure that an SRV record is present in IdM DNS. In the example used in the procedure below, an IdM administrator ensures the presence of the _kerberos._udp.idm.example.com SRV record with the value of 10 50 88 idm.example.com . This sets the following values: It sets the priority of the service to 10. It sets the weight of the service to 50. It sets the port to be used by the service to 88. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-SRV-record-is-present.yml Ansible playbook file. For example: Open the ensure-SRV-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to _kerberos._udp.idm.example.com . Set the srv_rec variable to '10 50 88 idm.example.com' . Set the zone_name variable to idm.example.com . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See DNS records in IdM . See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory.
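The playbooks in this chapter only ensure the desired state of the records; to confirm what IdM actually serves, you can query the records afterwards. The following is a minimal verification sketch that reuses the example names from this chapter ( idm.example.com , host1 , server.idm.example.com ); run it on any IdM client after obtaining a Kerberos ticket:

kinit admin
ipa dnsrecord-show idm.example.com host1
ipa dnsrecord-find idm.example.com
dig +short host1.idm.example.com A @server.idm.example.com

ipa dnsrecord-show lists every record type stored for the host1 entry, which is convenient after the multi-record playbook in Section 8.5, ipa dnsrecord-find lists all entries in the zone, and dig (provided by the bind-utils package) checks that the A record is actually resolvable from the IdM DNS server.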
[ "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-A-and-AAAA-records-are-present.yml ensure-A-and-AAAA-records-are-present-copy.yml", "--- - name: Ensure A and AAAA records are present hosts: ipaserver become: true gather_facts: false tasks: # Ensure A and AAAA records are present - name: Ensure that 'host1' has A and AAAA records. ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: host1 a_ip_address: 192.168.122.123 - name: host1 aaaa_ip_address: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-A-and-AAAA-records-are-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-dnsrecord-with-reverse-is-present.yml ensure-dnsrecord-with-reverse-is-present-copy.yml", "--- - name: Ensure DNS Record is present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that dns record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host1 zone_name: idm.example.com ip_address: 192.168.122.45 create_reverse: true state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-dnsrecord-with-reverse-is-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-presence-multiple-records.yml ensure-presence-multiple-records-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that multiple dns records are present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" records: - name: host1 zone_name: idm.example.com a_rec: 192.168.122.112 a_rec: 192.168.122.122 - name: host1 zone_name: idm.example.com aaaa_rec: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-multiple-records-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-CNAME-record-is-present.yml ensure-CNAME-record-is-present-copy.yml", "--- - name: Ensure that 'www.idm.example.com' and 'ftp.idm.example.com' CNAME records point to 'host03.idm.example.com'. hosts: ipaserver become: true gather_facts: false tasks: - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: www cname_hostname: host03 - name: ftp cname_hostname: host03", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-CNAME-record-is-present.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-SRV-record-is-present.yml ensure-SRV-record-is-present-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure a SRV record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: _kerberos._udp.idm.example.com srv_rec: '10 50 88 idm.example.com' zone_name: idm.example.com state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-SRV-record-is-present.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/working_with_dns_in_identity_management/using-ansible-to-manage-dns-records-in-idm_working-with-dns-in-identity-management
Chapter 10. Postinstallation storage configuration
Chapter 10. Postinstallation storage configuration After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration. By default, containers operate by using the ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store the data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning You can dynamically provision storage on-demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. 10.1. Dynamic provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning . 10.1.1. Red Hat Virtualization (RHV) object definition OpenShift Container Platform creates a default object of type StorageClass named ovirt-csi-sc which is used for creating dynamically provisioned persistent volumes. To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML: ovirt-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: "<boolean>" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: "<boolean>" 7 csi.storage.k8s.io/fstype: <file_system_type> 8 1 Name of the storage class. 2 Set to false if the storage class is the default storage class in the cluster. If set to true , the existing default storage class must be edited and set to false . 3 true enables dynamic volume expansion, false prevents it. true is recommended. 4 Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. This default policy is Delete . 5 Indicates how to provision and bind PersistentVolumeClaims . When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature. 6 The RHV storage domain name to use. 7 If true , the disk is thin provisioned. If false , the disk is preallocated. Thin provisioning is recommended. 8 Optional: File system type to be created. Possible values: ext4 (default) or xfs . 10.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 10.1. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. 
Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Not configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 10.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 10.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 10.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 10.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 10.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. 
Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 10.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 10.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 10.3. Deploy Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Data Foundation information about... 
See the following Red Hat OpenShift Data Foundation documentation: What's new, known issues, notable bug fixes, and Technology Previews OpenShift Data Foundation 4.12 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Data Foundation 4.12 deployment Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.12 in external mode Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters Deploying OpenShift Data Foundation 4.12 on VMware vSphere Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.12 using Amazon Web Services Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power infrastructure Deploying OpenShift Data Foundation on IBM Power Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z infrastructure Deploying OpenShift Data Foundation on IBM Z infrastructure Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster Monitoring Red Hat OpenShift Data Foundation 4.12 Resolve issues encountered during operations Troubleshooting OpenShift Data Foundation 4.12 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration
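The sample StorageClass in Section 10.1.1 is shown as YAML only; the following is a minimal sketch of how such an object is typically applied and consumed. It assumes the YAML was saved as ovirt-storageclass.yaml, and the claim name and size are illustrative values rather than values from this chapter:

oc create -f ovirt-storageclass.yaml
oc get storageclass

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storage_class_name>

The storageClassName field must match the name set in the StorageClass metadata. Because the class uses dynamic provisioning with the Immediate volume binding mode, creating the claim with oc create -f is enough for the csi.ovirt.org provisioner to create a matching persistent volume.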
[ "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/post-installation_configuration/post-install-storage-configuration
Chapter 20. Using the v2 UI
Chapter 20. Using the v2 UI 20.1. v2 user interface configuration Important This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags. When using the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI. There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. The v2 UI uses the standard definition of megabyte (MB) to report image manifest sizes. Procedure Log in to your deployment. In the navigation pane of your deployment, you are given the option to toggle between Current UI and New UI . Click the toggle button to set it to new UI, and then click Use Beta Environment , for example: 20.1.1. Creating a new organization using the v2 UI Prerequisites You have toggled your deployment to use the v2 UI. Use the following procedure to create an organization using the v2 UI. Procedure Click Organization in the navigation pane. Click Create Organization . Enter an Organization Name , for example, testorg . Click Create . Now, your example organization should populate under the Organizations page. 20.1.2. Deleting an organization using the v2 UI Use the following procedure to delete an organization using the v2 UI. Procedure On the Organizations page, select the name of the organization you want to delete, for example, testorg . Click the More Actions drop down menu. Click Delete . Note On the Delete page, there is a Search input box. With this box, users can search for specific organizations to ensure that they are properly scheduled for deletion. For example, if a user is deleting 10 organizations and they want to ensure that a specific organization was deleted, they can use the Search input box to confirm said organization is marked for deletion. Confirm that you want to permanently delete the organization by typing confirm in the box. Click Delete . After deletion, you are returned to the Organizations page. Note You can delete more than one organization at a time by selecting multiple organizations, and then clicking More Actions Delete . 20.1.3. Creating a new repository using the v2 UI Use the following procedure to create a repository using the v2 UI. Procedure Click Repositories on the navigation pane. Click Create Repository . Select a namespace, for example, quayadmin , and then enter a Repository name , for example, testrepo . Important Do not use the following words in your repository name: * build * trigger * tag When these words are used for repository names, users are unable access the repository, and are unable to permanently delete the repository. Attempting to delete these repositories returns the following error: Failed to delete repository <repository_name>, HTTP404 - Not Found. Click Create . Now, your example repository should populate under the Repositories page. 20.1.4. Deleting a repository using the v2 UI Prerequisites You have created a repository. Procedure On the Repositories page of the v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . 
Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 20.1.5. Pushing an image to the v2 UI Use the following procedure to push an image to the v2 UI. Procedure Pull a sample image from an external registry: USD podman pull busybox Tag the image: USD podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test Push the image to your registry: USD podman push quay-server.example.com/quayadmin/busybox:test Navigate to the Repositories page on the v2 UI and ensure that your image has been properly pushed. You can check the security details by selecting your image tag, and then navigating to the Security Report page. 20.1.6. Deleting an image using the v2 UI Use the following procedure to delete an image using the v2 UI. Prerequisites You have pushed an image to your registry. Procedure On the Repositories page of the v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 20.1.7. Creating a new team using the Red Hat Quay v2 UI Use the following procedure to create a new team using the Red Hat Quay v2 UI. Prerequisites You have created an organization with a repository. Procedure On the Red Hat Quay v2 UI, click the name of an organization. On your organization's page, click Teams and membership . Click the Create new team box. In the Create team popup window, provide a name for your new team. Optional. Provide a description for your new team. Click Proceed . A new popup window appears. Optional. Add this team to a repository, and set the permissions to one of Read , Write , Admin , or None . Optional. Add a team member or robot account. To add a team member, enter the name of their Red Hat Quay account. Review and finish the information, then click Review and Finish . The new team appears under the Teams and membership page . From here, you can click the kebab menu, and select one of the following options: Manage Team Members . On this page, you can view all members, team members, robot accounts, or users who have been invited. You can also add a new team member by clicking Add new member . Set repository permissions . On this page, you can set the repository permissions to one of Read , Write , Admin , or None . Delete . This popup windows allows you to delete the team by clicking Delete . Optional. You can click the one of the following options to reveal more information about teams, members, and collaborators: Team View . This menu shows all team names, the number of members, the number of repositories, and the role for each team. Members View . This menu shows all usernames of team members, the teams that they are part of, the repository permissions of the user. Collaborators View . This menu shows repository collaborators. Collaborators are users that do not belong to any team in the organization, but who have direct permissions on one or more repositories belonging to the organization. 20.1.8. Creating a robot account using the v2 UI Use the following procedure to create a robot account using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization that you will create the robot account for, for example, test-org . Click the Robot accounts tab Create robot account . 
In the Provide a name for your robot account box, enter a name, for example, robot1 . Optional. The following options are available if desired: Add the robot to a team. Add the robot to a repository. Adjust the robot's permissions. On the Review and finish page, review the information you have provided, then click Review and finish . The following alert appears: Successfully created robot account with robot name: <organization_name> + <robot_name> . Alternatively, if you tried to create a robot account with the same name as another robot account, you might receive the following error message: Error creating robot account . Optional. You can click Expand or Collapse to reveal descriptive information about the robot account. Optional. You can change permissions of the robot account by clicking the kebab menu Set repository permissions . The following message appears: Successfully updated repository permission . Optional. To delete your robot account, check the box of the robot account and click the trash can icon. A popup box appears. Type confirm in the text box, then, click Delete . Alternatively, you can click the kebab menu Delete . The following message appears: Successfully deleted robot account . 20.1.8.1. Bulk managing robot account repository access using the Red Hat Quay v2 UI Use the following procedure to manage, in bulk, robot account repository access using the Red Hat Quay v2 UI. Prerequisites You have created a robot account. You have created multiple repositories under a single organization. Procedure On the Red Hat Quay v2 UI landing page, click Organizations in the navigation pane. On the Organizations page, select the name of the organization that has multiple repositories. The number of repositories under a single organization can be found under the Repo Count column. On your organization's page, click Robot accounts . For the robot account that will be added to multiple repositories, click the kebab icon Set repository permissions . On the Set repository permissions page, check the boxes of the repositories that the robot account will be added to. For example: Set the permissions for the robot account, for example, None , Read , Write , Admin . Click save . An alert that says Success alert: Successfully updated repository permission appears on the Set repository permissions page, confirming the changes. Return to the Organizations Robot accounts page. Now, the Repositories column of your robot account shows the number of repositories that the robot account has been added to. 20.1.9. Creating default permissions using the Red Hat Quay v2 UI Default permissions defines permissions that should be granted automatically to a repository when it is created, in addition to the default of the repository's creator. Permissions are assigned based on the user who created the repository. Use the following procedure to create default permissions using the Red Hat Quay v2 UI. Procedure Click the name of an organization. Click Default permissions . Click create default permissions . A toggle drawer appears. Select either Anyone or Specific user to create a default permission when a repository is created. If selecting Anyone , the following information must be provided: Applied to . Search, invite, or add a user/robot/team. Permission . Set the permission to one of Read , Write , or Admin . If selecting Specific user , the following information must be provided: Repository creator . Provide either a user or robot account. Applied to . Provide a username, robot account, or team name. 
Permission . Set the permission to one of Read , Write , or Admin . Click Create default permission . A confirmation box appears, returning the following alert: Successfully created default permission for creator . 20.1.10. Organization settings for the v2 UI Use the following procedure to alter your organization settings using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization that you will create the robot account for, for example, test-org . Click the Settings tab. Optional. Enter the email address associated with the organization. Optional. Set the allotted time for the Time Machine feature to one of the following: 1 week 1 month 1 year Never Click Save . 20.1.11. Viewing image tag information using the v2 UI Use the following procedure to view image tag information using the v2 UI. Procedure On the v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox . Click the name of the tag, for example, test . You are taken to the Details page of the tag. The page reveals the following information: Name Repository Digest Vulnerabilities Creation Modified Size Labels How to fetch the image tag Optional. Click Security Report to view the tag's vulnerabilities. You can expand an advisory column to open up CVE data. Optional. Click Packages to view the tag's packages. Click the name of the repository, for example, busybox , to return to the Tags page. Optional. Hover over the Pull icon to reveal the ways to fetch the tag. Check the box of the tag, or multiple tags, click the Actions drop down menu, and then Delete to delete the tag. Confirm deletion by clicking Delete in the popup box. 20.1.12. Adjusting repository settings using the v2 UI Use the following procedure to adjust various settings for a repository using the v2 UI. Procedure On the v2 UI, click Repositories . Click the name of a repository, for example, quayadmin/busybox . Click the Settings tab. Optional. Click User and robot permissions . You can adjust the settings for a user or robot account by clicking the dropdown menu option under Permissions . You can change the settings to Read , Write , or Admin . Optional. Click Events and notifications . You can create an event and notification by clicking Create Notification . The following event options are available: Push to Repository Package Vulnerability Found Image build failed Image build queued Image build started Image build success Image build cancelled Then, issue a notification. The following options are available: Email Notification Flowdock Team Notification HipChat Room Notification Slack Notification Webhook POST After selecting an event option and the method of notification, include a Room ID # , a Room Notification Token , then, click Submit . Optional. Click Repository visibility . You can make the repository private, or public, by clicking Make Public . Optional. Click Delete repository . You can delete the repository by clicking Delete Repository . 20.2. Viewing Red Hat Quay tag history Use the following procedure to view tag history on the Red Hat Quay v2 UI. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click Tag History . On this page, you can perform the following actions: Search by tag name Select a date range View tag changes View tag modification dates and the time at which they were changed 20.3. 
Adding and managing labels on the Red Hat Quay v2 UI Red Hat Quay administrators can add and manage labels for tags by using the following procedure. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Edit labels . In the Edit labels window, click Add new label . Enter a label for the image tag using the key=value format, for example, com.example.release-date=2023-11-14 . Note The following error is returned when failing to use the key=value format: Invalid label format, must be key value separated by = . Click the whitespace of the box to add the label. Optional. Add a second label. Click Save labels to save the label to the image tag. The following notification is returned: Created labels successfully . Optional. Click the same image tag's menu kebab Edit labels X on the label to remove it; alternatively, you can edit the text. Click Save labels . The label is now removed or edited. 20.4. Setting tag expirations on the Red Hat Quay v2 UI Red Hat Quay administrators can set expiration dates for certain tags in a repository. This helps automate the cleanup of older or unused tags, helping to reduce storage space. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Change expiration . Optional. Alternatively, you can bulk add expiration dates by clicking the box of multiple tags, and then select Actions Set expiration . In the Change Tags Expiration window, set an expiration date, specifying the day of the week, month, day of the month, and year. For example, Wednesday, November 15, 2023 . Alternatively, you can click the calendar button and manually select the date. Set the time, for example, 2:30 PM . Click Change Expiration to confirm the date and time. The following notification is returned: Successfully set expiration for tag test to Nov 15, 2023, 2:26 PM . On the Red Hat Quay v2 UI Tags page, you can see when the tag is set to expire. For example: 20.5. Enabling the legacy UI In the navigation pane, you are given the option to toggle between Current UI and New UI . Click the toggle button to set it to Current UI .
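The robot accounts created in Section 20.1.8 are normally consumed from a container CLI rather than from the UI. The following is a minimal sketch using podman; it assumes the test-org organization and robot1 robot account from the example, a repository named testrepo under test-org that the robot can read, and a robot token copied from the UI. All of these are assumptions, so substitute your own values:

podman login -u='test-org+robot1' -p='<robot_account_token>' quay-server.example.com
podman pull quay-server.example.com/test-org/testrepo:latest

Quay robot account usernames take the form <organization>+<robot_name> , and the token shown when the robot account is created is used in place of a password.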
[ "podman pull busybox", "podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test", "podman push quay-server.example.com/quayadmin/busybox:test" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/using-v2-ui
Chapter 6. Securing network services
Chapter 6. Securing network services Red Hat Enterprise Linux 9 supports many different types of network servers. Their network services can expose the system security to risks of various types of attacks, such as denial of service attacks (DoS), distributed denial of service attacks (DDoS), script vulnerability attacks, and buffer overflow attacks. To increase the system security against attacks, it is important to monitor active network services that you use. For example, when a network service is running on a machine, its daemon listens for connections on network ports, and this can reduce the security. To limit exposure to attacks over the network, all services that are unused should be turned off. 6.1. Securing the rpcbind service The rpcbind service is a dynamic port-assignment daemon for remote procedure calls (RPC) services such as Network Information Service (NIS) and Network File System (NFS). Because it has weak authentication mechanisms and can assign a wide range of ports for the services it controls, it is important to secure rpcbind . You can secure rpcbind by restricting access to all networks and defining specific exceptions using firewall rules on the server. Note The rpcbind service is required on NFSv3 servers. The rpcbind service is not required on NFSv4 . Prerequisites The rpcbind package is installed. The firewalld package is installed and the service is running. Procedure Add firewall rules, for example: Limit TCP connection and accept packages only from the 192.168.0.0/24 host via the 111 port: Limit TCP connection and accept packages only from local host via the 111 port: Limit UDP connection and accept packages only from the 192.168.0.0/24 host via the 111 port: To make the firewall settings permanent, use the --permanent option when adding firewall rules. Reload the firewall to apply the new rules: Verification List the firewall rules: Additional resources For more information about NFSv4-only servers, see Configuring an NFSv4-only server Using and configuring firewalld 6.2. Securing the rpc.mountd service The rpc.mountd daemon implements the server side of the NFS mount protocol. The NFS mount protocol is used by NFS version 3 (RFC 1813). You can secure the rpc.mountd service by adding firewall rules to the server. You can restrict access to all networks and define specific exceptions using firewall rules. Prerequisites The rpc.mountd package is installed. The firewalld package is installed and the service is running. Procedure Add firewall rules to the server, for example: Accept mountd connections from the 192.168.0.0/24 host: Accept mountd connections from the local host: To make the firewall settings permanent, use the --permanent option when adding firewall rules. Reload the firewall to apply the new rules: Verification List the firewall rules: Additional resources Using and configuring firewalld 6.3. Securing the NFS service You can secure Network File System version 4 (NFSv4) by authenticating and encrypting all file system operations using Kerberos. When using NFSv4 with Network Address Translation (NAT) or a firewall, you can turn off the delegations by modifying the /etc/default/nfs file. Delegation is a technique by which the server delegates the management of a file to a client. In contrast, NFSv3 do not use Kerberos for locking and mounting files. The NFS service sends the traffic using TCP in all versions of NFS. The service supports Kerberos user and group authentication, as part of the RPCSEC_GSS kernel module. 
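As a minimal sketch of what the Kerberos-protected setup looks like, assuming the server and client are already enrolled in a Kerberos realm (for example, an IdM domain) and that /export is a directory you want to share (both are assumptions, not values from this chapter), the export on the server and the mount on the client could look like this:

# /etc/exports on the server: require Kerberos authentication, integrity, and encryption
/export    *(rw,sec=krb5p)

# re-export the file systems after editing /etc/exports
exportfs -r

# on the client: mount with the matching security flavor
mount -t nfs -o sec=krb5p server.example.com:/export /mnt

The sec=krb5p flavor authenticates users with Kerberos and also encrypts the NFS traffic; sec=krb5 and sec=krb5i are weaker variants that provide authentication only, or authentication plus integrity checking.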
NFS allows remote hosts to mount file systems over a network and interact with those file systems as if they are mounted locally. You can merge the resources on centralized servers and additionally customize NFS mount options in the /etc/nfsmount.conf file when sharing the file systems. 6.3.1. Export options for securing an NFS server The NFS server determines a list structure of directories and hosts about which file systems to export to which hosts in the /etc/exports file. You can use the following export options on the /etc/exports file: ro Exports the NFS volume as read-only. rw Allows read and write requests on the NFS volume. Use this option cautiously because allowing write access increases the risk of attacks. If your scenario requires mounting the directories with the rw option, make sure they are not writable for all users to reduce possible risks. root_squash Maps requests from uid / gid 0 to the anonymous uid / gid . This does not apply to any other uids or gids that might be equally sensitive, such as the bin user or the staff group. no_root_squash Turns off root squashing. By default, NFS shares change the root user to the nobody user, which is an unprivileged user account. This changes the owner of all the root created files to nobody , which prevents the uploading of programs with the setuid bit set. When using the no_root_squash option, remote root users can change any file on the shared file system and leave applications infected by trojans for other users. secure Restricts exports to reserved ports. By default, the server allows client communication only through reserved ports. However, it is easy for anyone to become a root user on a client on many networks, so it is rarely safe for the server to assume that communication through a reserved port is privileged. Therefore the restriction to reserved ports is of limited value; it is better to rely on Kerberos, firewalls, and restriction of exports to particular clients. Warning Extra spaces in the syntax of the /etc/exports file can lead to major changes in the configuration. In the following example, the /tmp/nfs/ directory is shared with the bob.example.com host and has read and write permissions. The following example is the same as the one but shares the same directory to the bob.example.com host with read-only permissions and shares it to the world with read and write permissions due to a single space character after the hostname. You can check the shared directories on your system by entering the showmount -e <hostname> command. Additionally, consider the following best practices when exporting an NFS server: Exporting home directories is a risk because some applications store passwords in plain text or in a weakly encrypted format. You can reduce the risk by reviewing and improving the application code. Some users do not set passwords on SSH keys which again leads to risks with home directories. You can reduce these risks by enforcing the use of passwords or using Kerberos. Restrict the NFS exports only to required clients. Use the showmount -e command on the NFS server to review what the server is exporting. Do not export anything that is not specifically required. Do not allow unnecessary users to log in to a server to reduce the risk of attacks. You can periodically check who and what can access the server. Warning Export an entire file system because exporting a subdirectory of a file system is not secure. An attacker might access the unexported part of a partially-exported file system. 
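The spacing pitfall described above is easier to see with the literal /etc/exports entries. This is a sketch of the two lines the text refers to, using the chapter's example path and host:

/tmp/nfs/     bob.example.com(rw)
/tmp/nfs/     bob.example.com (rw)

In the first entry, only bob.example.com receives read and write access. In the second entry, the single space after the hostname detaches the options from the host, so bob.example.com falls back to the default read-only settings while the (rw) options apply to the world. Checking the result with showmount -e <hostname> on the server makes the difference visible before clients mount the share.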
Additional resources Using automount in IdM when using RHEL Identity Management exports(5) and nfs(5) man pages on your system 6.3.2. Mount options for securing an NFS client You can pass the following options to the mount command to increase the security of NFS-based clients: nosuid Use the nosuid option to disable the set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program and you can use this option opposite to setuid option. noexec Use the noexec option to disable all executable files on the client. Use this to prevent users from accidentally executing files placed in the shared file system. nodev Use the nodev option to prevent the client's processing of device files as a hardware device. resvport Use the resvport option to restrict communication to a reserved port and you can use a privileged source port to communicate with the server. The reserved ports are reserved for privileged users and processes such as the root user. sec Use the sec option on the NFS server to choose the RPCGSS security flavor for accessing files on the mount point. Valid security flavors are none , sys , krb5 , krb5i , and krb5p . Important The MIT Kerberos libraries provided by the krb5-libs package do not support the Data Encryption Standard (DES) algorithm in new deployments. DES is deprecated and disabled by default in Kerberos libraries because of security and compatibility reasons. Use newer and more secure algorithms instead of DES, unless your environment requires DES for compatibility reasons. Additional resources Frequently-used NFS mount options 6.3.3. Securing NFS with firewall To secure the firewall on an NFS server, keep only the required ports open. Do not use the NFS connection port numbers for any other service. Prerequisites The nfs-utils package is installed. The firewalld package is installed and running. Procedure On NFSv4, the firewall must open TCP port 2049 . On NFSv3, open four additional ports with 2049 : rpcbind service assigns the NFS ports dynamically, which might cause problems when creating firewall rules. To simplify this process, use the /etc/nfs.conf file to specify which ports to use: Set TCP and UDP port for mountd ( rpc.mountd ) in the [mountd] section in port= <value> format. Set TCP and UDP port for statd ( rpc.statd ) in the [statd] section in port= <value> format. Set the TCP and UDP port for the NFS lock manager ( nlockmgr ) in the /etc/nfs.conf file: Set TCP port for nlockmgr ( rpc.statd ) in the [lockd] section in port=value format. Alternatively, you can use the nlm_tcpport option in the /etc/modprobe.d/lockd.conf file. Set UDP port for nlockmgr ( rpc.statd ) in the [lockd] section in udp-port=value format. Alternatively, you can use the nlm_udpport option in the /etc/modprobe.d/lockd.conf file. Verification List the active ports and RPC programs on the NFS server: 6.4. Securing the FTP service You can use the File Transfer Protocol (FTP) to transfer files over a network. Because all FTP transactions with the server, including user authentication, are unencrypted, make sure it is configured securely. RHEL 9 provides two FTP servers: Red Hat Content Accelerator ( tux ) A kernel-space web server with FTP capabilities. Very Secure FTP Daemon ( vsftpd ) A standalone, security-oriented implementation of the FTP service. The following security guidelines are for setting up the vsftpd FTP service. 6.4.1. 
Securing the FTP greeting banner When a user connects to the FTP service, FTP shows a greeting banner, which by default includes version information. Attackers might use this information to identify weaknesses in the system. You can hide this information by changing the default banner. You can define a custom banner by editing the /etc/banners/ftp.msg file to either directly include a single-line message, or to refer to a separate file, which can contain a multi-line message. Procedure To define a single line message, add the following option to the /etc/vsftpd/vsftpd.conf file: To define a message in a separate file: Create a .msg file which contains the banner message, for example /etc/banners/ ftp .msg : To simplify the management of multiple banners, place all banners into the /etc/banners/ directory. Add the path to the banner file to the banner_file option in the /etc/vsftpd/vsftpd.conf file: Verification Display the modified banner: 6.4.2. Preventing anonymous access and uploads in FTP By default, installing the vsftpd package creates the /var/ftp/ directory and a directory tree for anonymous users with read-only permissions on the directories. Because anonymous users can access the data, do not store sensitive data in these directories. To increase the security of the system, you can configure the FTP server to allow anonymous users to upload files to a specific directory and prevent anonymous users from reading data. In the following procedure, the anonymous user must be able to upload files in the directory owned by the root user but not change it. Procedure Create a write-only directory in the /var/ftp/pub/ directory: Add the following lines to the /etc/vsftpd/vsftpd.conf file: Optional: If your system has SELinux enabled and enforcing, enable SELinux boolean attributes allow_ftpd_anon_write and allow_ftpd_full_access . Warning Allowing anonymous users to read and write in directories might lead to the server becoming a repository for stolen software. 6.4.3. Securing user accounts for FTP FTP transmits usernames and passwords unencrypted over insecure networks for authentication. You can improve the security of FTP by denying system users access to the server from their user accounts. Perform as many of the following steps as applicable for your configuration. Procedure Disable all user accounts in the vsftpd server, by adding the following line to the /etc/vsftpd/vsftpd.conf file: Disable FTP access for specific accounts or specific groups of accounts, such as the root user and users with sudo privileges, by adding the usernames to the /etc/pam.d/vsftpd PAM configuration file. Disable user accounts, by adding the usernames to the /etc/vsftpd/ftpusers file. 6.4.4. Additional resources ftpd_selinux(8) man page on your system 6.5. Securing HTTP servers 6.5.1. Security enhancements in httpd.conf You can enhance the security of the Apache HTTP server by configuring security options in the /etc/httpd/conf/httpd.conf file. Always verify that all scripts running on the system work correctly before putting them into production. Ensure that only the root user has write permissions to any directory containing scripts or Common Gateway Interfaces (CGI). To change the directory ownership to root with write permissions, enter the following commands: In the /etc/httpd/conf/httpd.conf file, you can configure the following options: FollowSymLinks This directive is enabled by default and follows symbolic links in the directory. Indexes This directive is enabled by default. 
Disable this directive to prevent visitors from browsing files on the server. UserDir This directive is disabled by default because it can confirm the presence of a user account on the system. To activate user directory browsing for all user directories other than /root/ , use the UserDir enabled and UserDir disabled root directives. To add users to the list of disabled accounts, add a space-delimited list of users on the UserDir disabled line. ServerTokens This directive controls the server response header field which is sent back to clients. You can use the following parameters to customize the information: ServerTokens Full Provides all available information such as web server version number, server operating system details, installed Apache modules, for example: ServerTokens Full-Release Provides all available information with release versions, for example: ServerTokens Prod / ServerTokens ProductOnly Provides the web server name, for example: ServerTokens Major Provides the web server major release version, for example: ServerTokens Minor Provides the web server minor release version, for example: ServerTokens Min / ServerTokens Minimal Provides the web server minimal release version, for example: ServerTokens OS Provides the web server release version and operating system, for example: Use the ServerTokens Prod option to reduce the risk of attackers gaining any valuable information about your system. Important Do not remove the IncludesNoExec directive. By default, the Server Side Includes (SSI) module cannot execute commands. Changing this can allow an attacker to enter commands on the system. Removing httpd modules You can remove the httpd modules to limit the functionality of the HTTP server. To do so, edit configuration files in the /etc/httpd/conf.modules.d/ or /etc/httpd/conf.d/ directory. For example, to remove the proxy module: Additional resources The Apache HTTP server Customizing the SELinux policy for the Apache HTTP server 6.5.2. Securing the Nginx server configuration Nginx is a high-performance HTTP and proxy server. You can harden your Nginx configuration with the following configuration options. Procedure To disable version strings, modify the server_tokens configuration option: This option stops displaying additional details such as server version number. This configuration displays only the server name in all requests served by Nginx, for example: Add extra security headers that mitigate certain known web application vulnerabilities in specific /etc/nginx/ conf files: For example, the X-Frame-Options header option denies any page outside of your domain to frame any content served by Nginx, mitigating clickjacking attacks: For example, the x-content-type header prevents MIME-type sniffing in certain older browsers: For example, the X-XSS-Protection header enables Cross-Site Scripting (XSS) filtering, which prevents browsers from rendering potentially malicious content included in a response by Nginx: You can limit the services exposed to the public and limit what they do and accept from the visitors, for example: The snippet will limit access to all methods except GET and HEAD . You can disable HTTP methods, for example: You can configure SSL to protect the data served by your Nginx web server, consider serving it over HTTPS only. Furthermore, you can generate a secure configuration profile for enabling SSL in your Nginx server using the Mozilla SSL Configuration Generator. 
The generated configuration ensures that known vulnerable protocols (for example, SSLv2 and SSLv3), ciphers, and hashing algorithms (for example, 3DES and MD5) are disabled. You can also use the SSL Server Test to verify that your configuration meets modern security requirements. Additional resources Mozilla SSL Configuration Generator SSL Server Test 6.6. Securing PostgreSQL by limiting access to authenticated local users PostgreSQL is an object-relational database management system (DBMS). In Red Hat Enterprise Linux, PostgreSQL is provided by the postgresql-server package. You can reduce the risks of attacks by configuring client authentication. The pg_hba.conf configuration file stored in the database cluster's data directory controls the client authentication. Follow the procedure to configure PostgreSQL for host-based authentication. Procedure Install PostgreSQL: Initialize a database storage area using one of the following options: Using the initdb utility: The initdb command with the -D option creates the directory you specify if it does not already exist, for example /home/postgresql/db1/ . This directory then contains all the data stored in the database and also the client authentication configuration file. Using the postgresql-setup script: By default, the script uses the /var/lib/pgsql/data/ directory. This script helps system administrators with basic database cluster administration. To allow any authenticated local users to access any database with their usernames, modify the following line in the pg_hba.conf file: This can be problematic when you use layered applications that create database users and no local users. If you do not want to explicitly control all user names on the system, remove the local line entry from the pg_hba.conf file. Restart the database to apply the changes: The command updates the database and also verifies the syntax of the configuration file. 6.7. Securing the Memcached service Memcached is an open source, high-performance, distributed memory object caching system. It can improve the performance of dynamic web applications by lowering database load. Memcached is an in-memory key-value store for small chunks of arbitrary data, such as strings and objects, from results of database calls, API calls, or page rendering. Memcached allows assigning memory from underutilized areas to applications that require more memory. In 2018, vulnerabilities of DDoS amplification attacks by exploiting Memcached servers exposed to the public internet were discovered. These attacks took advantage of Memcached communication using the UDP protocol for transport. The attack was effective because of the high amplification ratio where a request with the size of a few hundred bytes could generate a response of a few megabytes or even hundreds of megabytes in size. In most situations, the memcached service does not need to be exposed to the public Internet. Such exposure may have its own security problems, allowing remote attackers to leak or modify information stored in Memcached. 6.7.1. Hardening Memcached against DDoS To mitigate security risks, perform as many of the following steps as applicable for your configuration. Procedure Configure a firewall in your LAN. If your Memcached server should be accessible only in your local network, do not route external traffic to ports used by the memcached service. 
For example, remove the default port 11211 from the list of allowed ports: If you use a single Memcached server on the same machine as your application, set up memcached to listen to localhost traffic only. Modify the OPTIONS value in the /etc/sysconfig/memcached file: Enable Simple Authentication and Security Layer (SASL) authentication: Modify or add the /etc/sasl2/memcached.conf file: Add an account in the SASL database: Ensure that the database is accessible for the memcached user and group: Enable SASL support in Memcached by adding the -S value to the OPTIONS parameter in the /etc/sysconfig/memcached file: Restart the Memcached server to apply the changes: Add the username and password created in the SASL database to the Memcached client configuration of your application. Encrypt communication between Memcached clients and servers with TLS: Enable encrypted communication between Memcached clients and servers with TLS by adding the -Z value to the OPTIONS parameter in the /etc/sysconfig/memcached file: Add the certificate chain file path in the PEM format using the -o ssl_chain_cert option. Add a private key file path using the -o ssl_key option.
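The individual hardening steps above can be combined in a single OPTIONS line. The following is a minimal sketch of an /etc/sysconfig/memcached file that binds the service to localhost, requires SASL, and enables TLS; the certificate and key paths are placeholders, the surrounding values are typical defaults rather than requirements, and you should drop the options that do not apply to your deployment.

# /etc/sysconfig/memcached - combined hardening sketch; paths are placeholders.
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
# -l binds to localhost only, -S requires SASL authentication, -Z enables TLS,
# and the -o options point memcached at the certificate chain and private key.
OPTIONS="-l 127.0.0.1,::1 -S -Z -o ssl_chain_cert=/etc/pki/tls/certs/memcached-chain.pem -o ssl_key=/etc/pki/tls/private/memcached.key"

Apply the changes by restarting the service with systemctl restart memcached.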
[ "firewall-cmd --add-rich-rule='rule family=\"ipv4\" port port=\"111\" protocol=\"tcp\" source address=\"192.168.0.0/24\" invert=\"True\" drop'", "firewall-cmd --add-rich-rule='rule family=\"ipv4\" port port=\"111\" protocol=\"tcp\" source address=\"127.0.0.1\" accept'", "firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" port port=\"111\" protocol=\"udp\" source address=\"192.168.0.0/24\" invert=\"True\" drop'", "firewall-cmd --reload", "firewall-cmd --list-rich-rule rule family=\"ipv4\" port port=\"111\" protocol=\"tcp\" source address=\"192.168.0.0/24\" invert=\"True\" drop rule family=\"ipv4\" port port=\"111\" protocol=\"tcp\" source address=\"127.0.0.1\" accept rule family=\"ipv4\" port port=\"111\" protocol=\"udp\" source address=\"192.168.0.0/24\" invert=\"True\" drop", "firewall-cmd --add-rich-rule 'rule family=\"ipv4\" service name=\"mountd\" source address=\"192.168.0.0/24\" invert=\"True\" drop'", "firewall-cmd --permanent --add-rich-rule 'rule family=\"ipv4\" source address=\"127.0.0.1\" service name=\"mountd\" accept'", "firewall-cmd --reload", "firewall-cmd --list-rich-rule rule family=\"ipv4\" service name=\"mountd\" source address=\"192.168.0.0/24\" invert=\"True\" drop rule family=\"ipv4\" source address=\"127.0.0.1\" service name=\"mountd\" accept", "/tmp/nfs/ bob.example.com(rw)", "/tmp/nfs/ bob.example.com (rw)", "rpcinfo -p", "ftpd_banner=Hello, all activity on ftp.example.com is logged.", "######### Hello, all activity on ftp.example.com is logged. #########", "banner_file=/etc/banners/ ftp .msg", "ftp localhost Trying ::1... Connected to localhost (::1). Hello, all activity on ftp.example.com is logged.", "mkdir /var/ftp/pub/ upload chmod 730 /var/ftp/pub/ upload ls -ld /var/ftp/pub/ upload drwx-wx---. 2 root ftp 4096 Nov 14 22:57 /var/ftp/pub/upload", "anon_upload_enable=YES anonymous_enable=YES", "local_enable=NO", "chown root <directory_name> chmod 755 <directory_name>", "Apache/2.4.37 (Red Hat Enterprise Linux) MyMod/1.2", "Apache/2.4.37 (Red Hat Enterprise Linux) (Release 41.module+el8.5.0+11772+c8e0c271)", "Apache", "Apache/2", "Apache/2.4", "Apache/2.4.37", "Apache/2.4.37 (Red Hat Enterprise Linux)", "echo '# All proxy modules disabled' > /etc/httpd/conf.modules.d/00-proxy.conf", "server_tokens off;", "curl -sI http://localhost | grep Server Server: nginx", "add_header X-Frame-Options \"SAMEORIGIN\";", "add_header X-Content-Type-Options nosniff;", "add_header X-XSS-Protection \"1; mode=block\";", "limit_except GET { allow 192.168.1.0/32; deny all; }", "Allow GET, PUT, POST; return \"405 Method Not Allowed\" for all others. if ( USDrequest_method !~ ^(GET|PUT|POST)USD ) { return 405; }", "yum install postgresql-server", "initdb -D /home/postgresql/db1/", "postgresql-setup --initdb", "local all all trust", "systemctl restart postgresql", "firewall-cmd --remove-port=11211/udp firewall-cmd --runtime-to-permanent", "OPTIONS=\"-l 127.0.0.1,::1\"", "sasldb_path: /path.to/memcached.sasldb", "saslpasswd2 -a memcached -c cacheuser -f /path.to/memcached.sasldb", "chown memcached:memcached /path.to/memcached.sasldb", "OPTIONS=\"-S\"", "systemctl restart memcached", "OPTIONS=\"-Z\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/securing_networks/securing-network-services_securing-networks
Chapter 39. Using A JAXBContext Object
Chapter 39. Using A JAXBContext Object Abstract The JAXBContext object allows Apache CXF's runtime to transform data between XML elements and Java objects. Application developers need to instantiate a JAXBContext object when they want to use JAXB objects in message handlers and when implementing consumers that work with raw XML messages. Overview The JAXBContext object is a low-level object used by the runtime. It allows the runtime to convert between XML elements and their corresponding Java representations. An application developer generally does not need to work with JAXBContext objects. The marshaling and unmarshaling of XML data is typically handled by the transport and binding layers of a JAX-WS application. However, there are instances when an application will need to manipulate the XML message content directly. In two of these instances: Section 41.1, "Using XML in a Consumer" Chapter 43, Writing Handlers you will need to instantiate a JAXBContext object using one of the two available JAXBContext.newInstance() methods. Best practices JAXBContext objects are resource intensive to instantiate. It is recommended that an application create as few instances as possible. One way to do this is to create a single JAXBContext object that can manage all of the JAXB objects used by your application and share it among as many parts of your application as possible. JAXBContext objects are thread safe. Getting a JAXBContext object using an object factory The JAXBContext class provides a newInstance() method, shown in Example 39.1, "Getting a JAXB Context Using Classes" , that takes a list of classes that implement JAXB objects. Example 39.1. Getting a JAXB Context Using Classes static JAXBContext newInstance(Class... classesToBeBound) throws JAXBException The returned JAXBContext object will be able to marshal and unmarshal data for the JAXB objects implemented by the classes passed into the method. It will also be able to work with any classes that are statically referenced from any of the classes passed into the method. While it is possible to pass the name of every JAXB class used by your application to the newInstance() method, it is not efficient. A more efficient way to accomplish the same goal is to pass in the object factory, or object factories, generated for your application. The resulting JAXBContext object will be able to manage any JAXB classes the specified object factories can instantiate. Getting a JAXBContext object using package names The JAXBContext class provides a newInstance() method, shown in Example 39.2, "Getting a JAXB Context Using Classes" , that takes a colon ( : ) separated list of package names. The specified packages should contain JAXB objects derived from XML Schema. Example 39.2. Getting a JAXB Context Using Classes static JAXBContext newInstance(String contextPath) throws JAXBException The returned JAXBContext object will be able to marshal and unmarshal data for all of the JAXB objects implemented by the classes in the specified packages.
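For reference, the following is a minimal sketch of how a single shared JAXBContext could be created and reused, following the best practices above. The org.example.types package and its generated ObjectFactory class are placeholders for the classes generated from your own schema, not names taken from this guide.

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;

public final class JaxbContextHolder {

    // JAXBContext is thread safe, so one shared instance can serve the whole application.
    private static JAXBContext context;

    public static synchronized JAXBContext get() throws JAXBException {
        if (context == null) {
            // Option 1: pass the generated object factory class (Example 39.1 style).
            context = JAXBContext.newInstance(org.example.types.ObjectFactory.class);
            // Option 2 (alternative): pass a colon-separated list of package names
            // (Example 39.2 style).
            // context = JAXBContext.newInstance("org.example.types:org.example.faults");
        }
        return context;
    }

    private JaxbContextHolder() {
    }
}

Message handlers and raw-XML consumers can then call JaxbContextHolder.get() instead of creating their own contexts, which keeps the number of JAXBContext instances to a minimum.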
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwsjaxbcontext
Chapter 1. AdministrationEventService
Chapter 1. AdministrationEventService 1.1. ListAdministrationEvents GET /v1/administration/events ListAdministrationEvents returns the list of events after filtered by requested fields. 1.1.1. Description 1.1.2. Parameters 1.1.2.1. Query Parameters Name Description Required Default Pattern pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null filter.from Matches events with last_occurred_at after a specific timestamp, i.e. the lower boundary. - null filter.until Matches events with last_occurred_at before a specific timestamp, i.e. the upper boundary. - null filter.domain Matches events from a specific domain. String - null filter.resourceType Matches events associated with a specific resource type. String - null filter.type Matches events based on their type. String - null filter.level Matches events based on their level. String - null 1.1.3. Return Type V1ListAdministrationEventsResponse 1.1.4. Content Type application/json 1.1.5. Responses Table 1.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListAdministrationEventsResponse 0 An unexpected error response. GooglerpcStatus 1.1.6. Samples 1.1.7. Common object reference 1.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 1.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 1.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 1.1.7.3. V1AdministrationEvent AdministrationEvents are administrative events emitted by Central. They are used to create transparency for users for asynchronous, background tasks. Events are part of Central's system health view. Field Name Required Nullable Type Description Format id String UUID of the event. type V1AdministrationEventType ADMINISTRATION_EVENT_TYPE_UNKNOWN, ADMINISTRATION_EVENT_TYPE_GENERIC, ADMINISTRATION_EVENT_TYPE_LOG_MESSAGE, level V1AdministrationEventLevel ADMINISTRATION_EVENT_LEVEL_UNKNOWN, ADMINISTRATION_EVENT_LEVEL_INFO, ADMINISTRATION_EVENT_LEVEL_SUCCESS, ADMINISTRATION_EVENT_LEVEL_WARNING, ADMINISTRATION_EVENT_LEVEL_ERROR, message String Message associated with the event. The message may include detailed information for this particular event. hint String Hint associated with the event. The hint may include different information based on the type of event. It can include instructions to resolve an event, or informational hints. domain String Domain associated with the event. An event's domain outlines the feature domain where the event was created from. As an example, this might be \"Image Scanning\". In case of events that cannot be tied to a specific domain, this will be \"General\". resource V1AdministrationEventResource numOccurrences String Occurrences associated with the event. When events may occur multiple times, the occurrences track the amount. int64 lastOccurredAt Date Specifies the time when the event has last occurred. date-time createdAt Date Specifies the time when the event has been created. date-time 1.1.7.4. V1AdministrationEventLevel AdministrationEventLevel exposes the different levels of events. Enum Values ADMINISTRATION_EVENT_LEVEL_UNKNOWN ADMINISTRATION_EVENT_LEVEL_INFO ADMINISTRATION_EVENT_LEVEL_SUCCESS ADMINISTRATION_EVENT_LEVEL_WARNING ADMINISTRATION_EVENT_LEVEL_ERROR 1.1.7.5. V1AdministrationEventResource Resource holds all information about the resource associated with the event. Field Name Required Nullable Type Description Format type String Resource type associated with the event. An event may refer to an underlying resource such as a particular image. In that case, the resource type will be filled here. id String Resource ID associated with the event. If an event refers to an underlying resource, the resource ID identifies the underlying resource. The resource ID is not guaranteed to be set, depending on the context of the administration event. name String Resource name associated with the event. If an event refers to an underlying resource, the resource name identifies the underlying resource. The resource name is not guaranteed to be set, depending on the context of the administration event. 1.1.7.6. V1AdministrationEventType AdministrationEventType exposes the different types of events. Enum Values ADMINISTRATION_EVENT_TYPE_UNKNOWN ADMINISTRATION_EVENT_TYPE_GENERIC ADMINISTRATION_EVENT_TYPE_LOG_MESSAGE 1.1.7.7. 
V1ListAdministrationEventsResponse Field Name Required Nullable Type Description Format events List of V1AdministrationEvent 1.2. GetAdministrationEvent GET /v1/administration/events/{id} GetAdministrationEvent retrieves an event by ID. 1.2.1. Description 1.2.2. Parameters 1.2.2.1. Path Parameters Name Description Required Default Pattern id X null 1.2.3. Return Type V1GetAdministrationEventResponse 1.2.4. Content Type application/json 1.2.5. Responses Table 1.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAdministrationEventResponse 0 An unexpected error response. GooglerpcStatus 1.2.6. Samples 1.2.7. Common object reference 1.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 1.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 1.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 1.2.7.3. V1AdministrationEvent AdministrationEvents are administrative events emitted by Central. 
They are used to create transparency for users for asynchronous, background tasks. Events are part of Central's system health view. Field Name Required Nullable Type Description Format id String UUID of the event. type V1AdministrationEventType ADMINISTRATION_EVENT_TYPE_UNKNOWN, ADMINISTRATION_EVENT_TYPE_GENERIC, ADMINISTRATION_EVENT_TYPE_LOG_MESSAGE, level V1AdministrationEventLevel ADMINISTRATION_EVENT_LEVEL_UNKNOWN, ADMINISTRATION_EVENT_LEVEL_INFO, ADMINISTRATION_EVENT_LEVEL_SUCCESS, ADMINISTRATION_EVENT_LEVEL_WARNING, ADMINISTRATION_EVENT_LEVEL_ERROR, message String Message associated with the event. The message may include detailed information for this particular event. hint String Hint associated with the event. The hint may include different information based on the type of event. It can include instructions to resolve an event, or informational hints. domain String Domain associated with the event. An event's domain outlines the feature domain where the event was created from. As an example, this might be \"Image Scanning\". In case of events that cannot be tied to a specific domain, this will be \"General\". resource V1AdministrationEventResource numOccurrences String Occurrences associated with the event. When events may occur multiple times, the occurrences track the amount. int64 lastOccurredAt Date Specifies the time when the event has last occurred. date-time createdAt Date Specifies the time when the event has been created. date-time 1.2.7.4. V1AdministrationEventLevel AdministrationEventLevel exposes the different levels of events. Enum Values ADMINISTRATION_EVENT_LEVEL_UNKNOWN ADMINISTRATION_EVENT_LEVEL_INFO ADMINISTRATION_EVENT_LEVEL_SUCCESS ADMINISTRATION_EVENT_LEVEL_WARNING ADMINISTRATION_EVENT_LEVEL_ERROR 1.2.7.5. V1AdministrationEventResource Resource holds all information about the resource associated with the event. Field Name Required Nullable Type Description Format type String Resource type associated with the event. An event may refer to an underlying resource such as a particular image. In that case, the resource type will be filled here. id String Resource ID associated with the event. If an event refers to an underlying resource, the resource ID identifies the underlying resource. The resource ID is not guaranteed to be set, depending on the context of the administration event. name String Resource name associated with the event. If an event refers to an underlying resource, the resource name identifies the underlying resource. The resource name is not guaranteed to be set, depending on the context of the administration event. 1.2.7.6. V1AdministrationEventType AdministrationEventType exposes the different types of events. Enum Values ADMINISTRATION_EVENT_TYPE_UNKNOWN ADMINISTRATION_EVENT_TYPE_GENERIC ADMINISTRATION_EVENT_TYPE_LOG_MESSAGE 1.2.7.7. V1GetAdministrationEventResponse Field Name Required Nullable Type Description Format event V1AdministrationEvent 1.3. CountAdministrationEvents GET /v1/count/administration/events CountAdministrationEvents returns the number of events after filtering by requested fields. 1.3.1. Description 1.3.2. Parameters 1.3.2.1. Query Parameters Name Description Required Default Pattern filter.from Matches events with last_occurred_at after a specific timestamp, i.e. the lower boundary. - null filter.until Matches events with last_occurred_at before a specific timestamp, i.e. the upper boundary. - null filter.domain Matches events from a specific domain. 
String - null filter.resourceType Matches events associated with a specific resource type. String - null filter.type Matches events based on their type. String - null filter.level Matches events based on their level. String - null 1.3.3. Return Type V1CountAdministrationEventsResponse 1.3.4. Content Type application/json 1.3.5. Responses Table 1.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountAdministrationEventsResponse 0 An unexpected error response. GooglerpcStatus 1.3.6. Samples 1.3.7. Common object reference 1.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 1.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 1.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 1.3.7.3. V1CountAdministrationEventsResponse Field Name Required Nullable Type Description Format count Integer The total number of events after filtering and deduplication. int32
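As a usage illustration, the endpoints above can be called with any HTTP client. The following is a minimal sketch using curl; the Central address and the API token are placeholders, and bearer-token authentication is assumed to be configured for your deployment.

ROX_CENTRAL="https://central.example.com"
ROX_API_TOKEN="<api-token>"

# List warning-level events from the "Image Scanning" domain.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "${ROX_CENTRAL}/v1/administration/events?filter.level=ADMINISTRATION_EVENT_LEVEL_WARNING&filter.domain=Image%20Scanning"

# Count log-message events.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "${ROX_CENTRAL}/v1/count/administration/events?filter.type=ADMINISTRATION_EVENT_TYPE_LOG_MESSAGE"

# Retrieve a single event by its UUID.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "${ROX_CENTRAL}/v1/administration/events/<event-uuid>"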
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/administrationeventservice
2.3. Clusters
2.3. Clusters 2.3.1. Introduction to Clusters A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models. Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the cluster and settings on the virtual machines. The cluster is the highest level at which power and load-sharing policies can be defined. The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count , respectively. Clusters run virtual machines or Red Hat Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together. Red Hat Virtualization creates a default cluster in the default data center during installation. Figure 2.2. Cluster 2.3.2. Cluster Tasks Note Some cluster options do not apply to Gluster clusters. For more information about using Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . 2.3.2.1. Creating a New Cluster A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must have the same CPU architecture. To optimize your CPU types, create your hosts before you create your cluster. After creating the cluster, you can configure the hosts using the Guide Me button. Procedure Click Compute Clusters . Click New . Select the Data Center the cluster will belong to from the drop-down list. Enter the Name and Description of the cluster. Select a network from the Management Network drop-down list to assign the management network role. Select the CPU Architecture . For CPU Type , select the oldest CPU processor family among the hosts that will be part of this cluster. The CPU types are listed in order from the oldest to newest. Important A hosts whose CPU processor family is older than the one you specify with CPU Type cannot be part of this cluster. For details, see Which CPU family should a RHEV3 or RHV4 cluster be set to? . Select the FIPS Mode of the cluster from the drop-down list. Select the Compatibility Version of the cluster from the drop-down list. Select the Switch Type from the drop-down list. Select the Firewall Type for hosts in the cluster, either Firewalld (default) or iptables . Note iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld Select either the Enable Virt Service or Enable Gluster Service check box to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes. Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance. Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Manager, allowing the administrator to provide an explanation for the maintenance. 
Optionally select the /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use. The /dev/urandom source (Linux-provided device) is enabled by default. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster. Click the Migration Policy tab to define the virtual machine migration policy for the cluster. Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and select a serial number policy. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster. Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options. Click the MAC Address Pool tab to specify a MAC address pool other than the default pool for the cluster. For more options on creating, editing, or removing MAC address pools, see MAC Address Pools . Click OK to create the cluster and open the Cluster - Guide Me window. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the cluster and clicking More Actions ( ), then clicking Guide Me . 2.3.2.2. General Cluster Settings Explained The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK , prohibiting the changes being accepted. In addition, field prompts indicate the expected values or range of values. Table 2.4. General Cluster Settings Field Description/Action Data Center The data center that will contain the cluster. The data center must be created before adding a cluster. Name The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Description / Comment The description of the cluster or additional notes. These fields are recommended but not mandatory. Management Network The logical network that will be assigned the management network role. The default is ovirtmgmt . This network will also be used for migrating virtual machines if the migration network is not properly attached to the source or the destination hosts. On existing clusters, the management network can only be changed using the Manage Networks button in the Logical Networks tab in the details view. CPU Architecture The CPU architecture of the cluster. All hosts in a cluster must run the architecture you specify. Different CPU types are available depending on which CPU architecture is selected. undefined : All other CPU types. x86_64 : For Intel and AMD CPU types. ppc64 : For IBM POWER CPU types. CPU Type The oldest CPU family in the cluster. For a list of CPU types, see CPU Requirements in the Planning and Prerequisites Guide . You cannot change this after creating the cluster without significant disruption. Set CPU type to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. 
Chipset/Firmware Type This setting is only available if the CPU Architecture of the cluster is set to x86_64 . This setting specifies the chipset and firmware type. Options are: Auto Detect : This setting automatically detects the chipset and firmware type. When Auto Detect is selected, the chipset and firmware are determined by the first host up in the cluster. I440FX Chipset with BIOS : Specifies the chipset to I440FX with a firmware type of BIOS. Q35 Chipset with BIOS : Specifies the Q35 chipset with a firmware type of BIOS without UEFI (Default for clusters with compatibility version 4.4). Q35 Chipset with UEFI Specifies the Q35 chipset with a firmware type of BIOS with UEFI. (Default for clusters with compatibility version 4.7) Q35 Chipset with UEFI SecureBoot Specifies the Q35 chipset with a firmware type of UEFI with SecureBoot, which authenticates the digital signatures of the boot loader. For more information, see UEFI and the Q35 chipset in the Administration Guide . Change Existing VMs/Templates from 1440fx to Q35 Chipset with Bios Select this check box to change existing workloads when the cluster's chipset changes from I440FX to Q35. FIPS Mode The FIPS mode used by the cluster. All hosts in the cluster must run the FIPS mode you specify or they will become non-operational. Auto Detect : This setting automatically detects whether FIPS mode is enabled or disabled. When Auto Detect is selected, the FIPS mode is determined by the first host up in the cluster. Disabled : This setting disables FIPS on the cluster. Enabled : This setting enables FIPS on the cluster. Compatibility Version The version of Red Hat Virtualization. You will not be able to select a version earlier than the version specified for the data center. Switch Type The type of switch used by the cluster. Linux Bridge is the standard Red Hat Virtualization switch. OVS provides support for Open vSwitch networking features. Firewall Type Specifies the firewall type for hosts in the cluster, either firewalld (default) or iptables . iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld . If you change an existing cluster's firewall type, you must reinstall all hosts in the cluster to apply the change. Default Network Provider Specifies the default external network provider that the cluster will use. If you select Open Virtual Network (OVN), the hosts added to the cluster are automatically configured to communicate with the OVN provider. If you change the default network provider, you must reinstall all hosts in the cluster to apply the change. Maximum Log Memory Threshold Specifies the logging threshold for maximum memory consumption as a percentage or as an absolute value in MB. A message is logged if a host's memory usage exceeds the percentage value or if a host's available memory falls below the absolute value in MB. The default is 95% . Enable Virt Service If this check box is selected, hosts in this cluster will be used to run virtual machines. Enable Gluster Service If this check box is selected, hosts in this cluster will be used as Red Hat Gluster Storage Server nodes, and not for running virtual machines. Import existing gluster configuration This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Virtualization Manager. 
The following options are required for each host in the cluster that is being imported: Hostname : Enter the IP or fully qualified domain name of the Gluster host server. Host ssh public key (PEM) : Red Hat Virtualization Manager fetches the host's SSH public key, to ensure you are connecting with the correct host. Password : Enter the root password required for communicating with the host. Additional Random Number Generator source If the check box is selected, all hosts in the cluster have the additional random number generator device available. This enables passthrough of entropy from the random number generator device to virtual machines. Gluster Tuned Profile This check box is only available if the Enable Gluster Service check box is selected. This option specifies the virtual-host tuning profile to enable more aggressive writeback of dirty memory pages, which benefits the host performance. 2.3.2.3. Optimization Settings Explained Memory Considerations Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine. CPU Considerations For non-CPU-intensive workloads , you can run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). The following benefits can be achieved: You can run a greater number of virtual machines, which reduces hardware requirements. You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads. For best performance, and especially for CPU-intensive workloads , you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. When the host has hyperthreading enabled, QEMU treats the host's hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core. The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows. Table 2.5. Optimization Settings Field Description/Action Memory Optimization None - Disable memory overcommit : Disables memory page sharing. For Server Load - Allow scheduling of 150% of physical memory : Sets the memory page sharing threshold to 150% of the system memory on each host. For Desktop Load - Allow scheduling of 200% of physical memory : Sets the memory page sharing threshold to 200% of the system memory on each host. CPU Threads Selecting the Count Threads As Cores check box enables hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). 
When this check box is selected, the exposed host threads are treated as cores that virtual machines can use. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores. Memory Balloon Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the Memory Overcommit Manager (MoM) starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine includes a balloon device unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up . If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster . It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution. KSM control Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. 2.3.2.4. Migration Policy Settings Explained A migration policy defines the conditions for live migrating virtual machines in the event of host failure. These conditions include the downtime of the virtual machine during migration, network bandwidth, and how the virtual machines are prioritized. Table 2.6. Migration Policies Explained Policy Description Cluster default (Minimal downtime) Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. Minimal downtime A policy that lets virtual machines migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. Post-copy migration When used, post-copy migration pauses the migrating virtual machine vCPUs on the source host, transfers only a minimum of memory pages, activates the virtual machine vCPUs on the destination host, and transfers the remaining memory pages while the virtual machine is running on the destination. The post-copy policy first tries pre-copy to verify whether convergence can occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. This significantly reduces the downtime of the migrated virtual machine, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source virtual machine change. It is optimal for migrating virtual machines in heavy continuous use, which would not be possible to migrate with standard pre-copy migration. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. 
Warning If the network connection breaks prior to the completion of the post-copy process, the Manager pauses and then kills the running virtual machine. Do not use post-copy migration if the virtual machine availability is critical or if the migration network is unstable. Suspend workload if needed A policy that lets virtual machines migrate in most situations, including virtual machines running heavy workloads. Because of this, virtual machines may experience a more significant downtime than with some of the other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. The bandwidth settings define the maximum bandwidth of both outgoing and incoming migrations per host. Table 2.7. Bandwidth Explained Policy Description Auto Bandwidth is copied from the Rate Limit [Mbps] setting in the data center Host Network QoS . If the rate limit has not been defined, it is computed as a minimum of link speeds of sending and receiving network interfaces. If rate limit has not been set, and link speeds are not available, it is determined by local VDSM setting on sending host. Hypervisor default Bandwidth is controlled by local VDSM setting on sending Host. Custom Defined by user (in Mbps). This value is divided by the number of concurrent migrations (default is 2, to account for ingoing and outgoing migration). Therefore, the user-defined bandwidth must be large enough to accommodate all concurrent migrations. For example, if the Custom bandwidth is defined as 600 Mbps, a virtual machine migration's maximum bandwidth is actually 300 Mbps. The resilience policy defines how the virtual machines are prioritized in the migration. Table 2.8. Resilience Policy Settings Field Description/Action Migrate Virtual Machines Migrates all virtual machines in order of their defined priority. Migrate only Highly Available Virtual Machines Migrates only highly available virtual machines to prevent overloading other hosts. Do Not Migrate Virtual Machines Prevents virtual machines from being migrated. Table 2.9. Additional Properties Settings Field Description/Action Enable Migration Encryption Allows the virtual machine to be encrypted during migration. Cluster default Encrypt Don't encrypt Parallel Migrations Allows you to specify whether and how many parallel migration connections to use. Disabled : The virtual machine is migrated using a single, non-parallel connection. Auto : The number of parallel connections is automatically determined. This settings might automatically disable parallel connections. Auto Parallel : The number of parallel connections is automatically determined. Custom : Allows you to specify the preferred number of parallel Connections, the actual number may be lower. Number of VM Migration Connections This setting is only available when Custom is selected. The preferred number of custom parallel migrations, between 2 and 255. 2.3.2.5. Scheduling Policy Settings Explained Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information. Table 2.10. 
Scheduling Policy Tab Properties Field Description/Action Select Policy Select a policy from the drop-down list. none : Disables load-balancing or power-sharing between hosts for already-running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . evenly_distributed : Distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , VCpuToPhysicalCpuRatio , or MaxFreeMemoryForOverUtilized . cluster_maintenance : Limits activity in a cluster during maintenance tasks. No new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate. power_saving : Distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that it can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value. vm_evenly_distributed : Distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold . Properties The following properties appear depending on the selected policy. Edit them if necessary: HighVmCount : Sets the minimum number of virtual machines that must be running per host to enable load balancing. The default value is 10 running virtual machines on one host. Load balancing is only enabled when there is at least one host in the cluster that has at least HighVmCount running virtual machines. MigrationThreshold : Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold. The default value is 5 . SpmVmGrace : Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines the SPM host can run in comparison to other hosts. The default value is 5 . CpuOverCommitDurationMinutes : Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action. The defined time interval protects against temporary spikes in CPU load activating scheduling policies and instigating unnecessary virtual machine migration. Maximum two characters. The default value is 2 . HighUtilization : Expressed as a percentage. 
If the host runs with CPU usage at or above the high utilization value for the defined time interval, the Red Hat Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold. The default value is 80 . LowUtilization : Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, the Red Hat Virtualization Manager will migrate virtual machines to other hosts in the cluster. The Manager will power down the original host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. The default value is 20 . ScaleDown : Reduces the impact of the HA Reservation weight function, by dividing a host's score by the specified amount. This is an optional property that can be added to any policy, including none . HostsInReserve : Specifies a number of hosts to keep running even though there are no running virtual machines on them. This is an optional property that can be added to the power_saving policy. EnableAutomaticHostPowerManagement : Enables automatic power management for all hosts in the cluster. This is an optional property that can be added to the power_saving policy. The default value is true . MaxFreeMemoryForOverUtilized : Specifies the minimum amount of free memory a host should have, in MB. If a host has less free memory than this amount, the RHV Manager considers the host overutilized. For example, if you set this property to 1000 , a host that has less than 1 GB of free memory is overutilized. For details on how this property interacts with the power_saving and evenly_distributed policies, see MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties . You can add this property to the power_saving and evenly_distributed policies. Although it appears among the list of properties for the vm_evenly_distributed policy, it does not apply to that policy. MinFreeMemoryForUnderUtilized : Specifies the maximum amount of free memory a host should have, in MB. If a host has more free memory than this amount, the RHV Manager scheduler considers the host underutilized. For example, if you set this parameter to 10000 , a host that has more than 10 GB of free memory is underutilized. For details on how this property interacts with the power_saving and evenly_distributed policies, see MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties . You can add this property to the power_saving and evenly_distributed policies. Although it appears among the list of properties for the vm_evenly_distributed policy, it does not apply to that policy. HeSparesCount : Sets the number of additional self-hosted engine nodes that must reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. Other virtual machines are prevented from starting on a self-hosted engine node if doing so would not leave enough free memory for the Manager virtual machine. This is an optional property that can be added to the power_saving , vm_evenly_distributed , and evenly_distributed policies. The default value is 0 . Scheduler Optimization Optimize scheduling for host weighing/ordering. Optimize for Utilization : Includes weight modules in scheduling to allow best selection. Optimize for Speed : Skips host weighting in cases where there are more than ten pending requests. 
Enable Trusted Service Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details. IMPORTANT : OpenAttestation and Intel Trusted Execution Technology (Intel TXT) are no longer available. Enable HA Reservation Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly. Serial Number Policy Configure the policy for assigning serial numbers to each new virtual machine in the cluster: System Default : Use the system-wide defaults in the Manager database. To configure these defaults, use the engine configuration tool to set the values of the DefaultSerialNumberPolicy and DefaultCustomSerialNumber . These key-value pairs are saved in the vdc_options table of the Manager database. For DefaultSerialNumberPolicy : Default value: HOST_ID Possible values: HOST_ID , VM_ID , CUSTOM Command line example: engine-config --set DefaultSerialNumberPolicy=VM_ID Important: Restart the Manager to apply the configuration. For DefaultCustomSerialNumber : Default value: Dummy serial number Possible values: Any string (max length 255 characters) Command line example: engine-config --set DefaultCustomSerialNumber="My very special string value" Important: Restart the Manager to apply the configuration. Host ID : Set each new virtual machine's serial number to the UUID of the host. Vm ID : Set each new virtual machine's serial number to the UUID of the virtual machine. Custom serial number : Set each new virtual machine's serial number to the value you specify in the following Custom Serial Number parameter. Custom Serial Number Specify the custom serial number to apply to new virtual machines in the cluster. When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log . /var/log/vdsm/mom.log is the Memory Overcommit Manager log file. 2.3.2.6. MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties The scheduler has a background process that migrates virtual machines according to the current cluster scheduling policy and its parameters. Based on the various criteria and their relative weights in a policy, the scheduler continuously categorizes hosts as source hosts or destination hosts and migrates individual virtual machines from the former to the latter. The following description explains how the evenly_distributed and power_saving cluster scheduling policies interact with the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties. Although both policies consider CPU and memory load, CPU load is not relevant for the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties. If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the evenly_distributed policy: Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts. Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become destination hosts. If MaxFreeMemoryForOverUtilized is not defined, the scheduler does not migrate virtual machines based on the memory load. 
(It continues migrating virtual machines based on the policy's other criteria, such as CPU load.) If MinFreeMemoryForUnderUtilized is not defined, the scheduler considers all hosts eligible to become destination hosts. If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the power_saving policy: Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts. Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become source hosts. Hosts that have more free memory than MaxFreeMemoryForOverUtilized are not overutilized and become destination hosts. Hosts that have less free memory than MinFreeMemoryForUnderUtilized are not underutilized and become destination hosts. The scheduler prefers migrating virtual machines to hosts that are neither overutilized nor underutilized. If there are not enough of these hosts, the scheduler can migrate virtual machines to underutilized hosts. If the underutilized hosts are not needed for this purpose, the scheduler can power them down. If MaxFreeMemoryForOverUtilized is not defined, no hosts are overutilized. Therefore, only underutilized hosts are source hosts, and destination hosts include all hosts in the cluster. If MinFreeMemoryForUnderUtilized is not defined, only overutilized hosts are source hosts, and hosts that are not overutilized are destination hosts. To prevent the host from overutilization of all the physical CPUs, define the virtual CPU to physical CPU ratio - VCpuToPhysicalCpuRatio with a value between 0.1 and 2.9. When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine. If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered. In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio . Additional resources Cluster Scheduling Policy Settings 2.3.2.7. Cluster Console Settings Explained The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows. Table 2.11. Console Settings Field Description/Action Define SPICE Proxy for Cluster Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the VM Portal) is outside of the network where the hypervisors reside. Overridden SPICE proxy address The proxy by which the SPICE client connects to virtual machines. The address must be in the following format: 2.3.2.8. Fencing Policy Settings Explained The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows. Table 2.12. Fencing Policy Settings Field Description/Action Enable fencing Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere. Skip fencing if host has live lease on storage If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced. 
Skip fencing on cluster connectivity issues If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold . The Threshold value is selected from the drop-down list; available values are 25 , 50 , 75 , and 100 . Skip fencing if gluster bricks are up This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and can be reached from other peers. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. Skip fencing if gluster quorum not met This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and shutting down the host will cause loss of quorum. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. 2.3.2.9. Setting Load and Power Management Policies for Hosts in a Cluster The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Cluster Scheduling Policy Settings . Procedure Click Compute Clusters and select a cluster. Click Edit . Click the Scheduling Policy tab. Select one of the following policies: none vm_evenly_distributed Set the minimum number of virtual machines that must be running on at least one host to enable load balancing in the HighVmCount field. Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field. Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information. evenly_distributed Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information. Optionally, to prevent the host from overutilization of all the physical CPUs, define the virtual CPU to physical CPU ratio - VCpuToPhysicalCpuRatio with a value between 0.1 and 2.9. 
When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine. If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered. In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio . power_saving Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field. Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information. Choose one of the following as the Scheduler Optimization for the cluster: Select Optimize for Utilization to include weight modules in scheduling to allow best selection. Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box. OpenAttestation and Intel Trusted Execution Technology (Intel TXT) are no longer available. Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines. Optionally select a Serial Number Policy for the virtual machines in the cluster: System Default : Use the system-wide defaults, which are configured in the Manager database using the engine configuration tool and the DefaultSerialNumberPolicy and DefaultCustomSerialNumber key names. The default value for DefaultSerialNumberPolicy is to use the Host ID. See Scheduling Policies in the Administration Guide for more information. Host ID : Set each virtual machine's serial number to the UUID of the host. Vm ID : Set each virtual machine's serial number to the UUID of the virtual machine. Custom serial number : Set each virtual machine's serial number to the value you specify in the following Custom Serial Number parameter. Click OK . 2.3.2.10. Updating the MoM Policy on Hosts in a Cluster The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions for a cluster pass to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode. However, if necessary, you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up . The following procedure must be performed on each host individually. Procedure Click Compute Clusters . Click the cluster's name. This opens the details view. Click the Hosts tab and select the host that requires an updated MoM policy. Click Sync MoM Policy . The MoM policy on the host is updated without having to move the host to maintenance mode and back Up . 2.3.2.11.
Creating a CPU Profile CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect. This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs. Procedure Click Compute Clusters . Click the cluster's name. This opens the details view. Click the CPU Profiles tab. Click New . Enter a Name and a Description for the CPU profile. Select the quality of service to apply to the CPU profile from the QoS list. Click OK . 2.3.2.12. Removing a CPU Profile Remove an existing CPU profile from your Red Hat Virtualization environment. Procedure Click Compute Clusters . Click the cluster's name. This opens the details view. Click the CPU Profiles tab and select the CPU profile to remove. Click Remove . Click OK . If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default CPU profile. 2.3.2.13. Importing an Existing Red Hat Gluster Storage Cluster You can import a Red Hat Gluster Storage cluster and all hosts belonging to the cluster into Red Hat Virtualization Manager. When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status command is executed on that host through SSH, then displays a list of hosts that are a part of the cluster. You must manually verify the fingerprint of each host and provide passwords for them. You will not be able to import the cluster if one of the hosts in the cluster is down or unreachable. As the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. Procedure Click Compute Clusters . Click New . Select the Data Center the cluster will belong to. Enter the Name and Description of the cluster. Select the Enable Gluster Service check box and the Import existing gluster configuration check box. The Import existing gluster configuration field is only displayed if the Enable Gluster Service is selected. In the Hostname field, enter the host name or IP address of any server in the cluster. The host SSH Fingerprint displays to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, an error Error in fetching fingerprint displays in the Fingerprint field. Enter the Password for the server, and click OK . The Add Hosts window opens, and a list of hosts that are a part of the cluster displays. For each host, enter the Name and the Root Password . If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field. Click Apply to set the entered password on all hosts. Verify that the fingerprints are valid and submit your changes by clicking OK . The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Gluster Storage cluster into Red Hat Virtualization Manager. 2.3.2.14.
Explanation of Settings in the Add Hosts Window The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service check box in the New Cluster window and provided the necessary host details. Table 2.13. Add Gluster Hosts Settings Field Description Use a common password Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts. Name Enter the name of the host. Hostname/IP This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window. Root Password Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster. Fingerprint The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window. 2.3.2.15. Removing a Cluster Move all hosts out of a cluster before removing it. Note You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center. Procedure Click Compute Clusters and select a cluster. Ensure there are no hosts in the cluster. Click Remove . Click OK 2.3.2.16. Memory Optimization To increase the number of virtual machines on a host, you can use memory overcommitment , in which the memory you assign to virtual machines exceeds RAM and relies on swap space. However, there are potential problems with memory overcommitment: Swapping performance - Swap space is slower and consumes more CPU resources than RAM, impacting virtual machine performance. Excessive swapping can lead to CPU thrashing. Out-of-memory (OOM) killer - If the host runs out of swap space, new processes cannot start, and the kernel's OOM killer daemon begins shutting down active processes such as virtual machine guests. To help overcome these shortcomings, you can do the following: Limit memory overcommitment using the Memory Optimization setting and the Memory Overcommit Manager (MoM) . Make the swap space large enough to accommodate the maximum potential demand for virtual memory and have a safety margin remaining. Reduce virtual memory size by enabling memory ballooning and Kernel Same-page Merging (KSM) . 2.3.2.17. Memory Optimization and Memory Overcommitment You can limit the amount of memory overcommitment by selecting one of the Memory Optimization settings: None (0%), 150% , or 200% . Each setting represents a percentage of RAM. For example, with a host that has 64 GB RAM, selecting 150% means you can overcommit memory by an additional 32 GB, for a total of 96 GB in virtual memory. If the host uses 4 GB of that total, the remaining 92 GB are available. You can assign most of that to the virtual machines ( Memory Size on the System tab), but consider leaving some of it unassigned as a safety margin. Sudden spikes in demand for virtual memory can impact performance before the MoM, memory ballooning, and KSM have time to re-optimize virtual memory. To reduce that impact, select a limit that is appropriate for the kinds of applications and workloads you are running: For workloads that produce more incremental growth in demand for memory, select a higher percentage, such as 200% or 150% . 
For more critical applications or workloads that produce more sudden increases in demand for memory, select a lower percentage, such as 150% or None (0%). Selecting None helps prevent memory overcommitment but allows the MoM, memory balloon devices, and KSM to continue optimizing virtual memory. Important Always test your Memory Optimization settings by stress testing under a wide range of conditions before deploying the configuration to production. To configure the Memory Optimization setting, click the Optimization tab in the New Cluster or Edit Cluster windows. See Cluster Optimization Settings Explained . Additional comments: The Host Statistics views display useful historical information for sizing the overcommitment ratio. The actual memory available cannot be determined in real time because the amount of memory optimization achieved by KSM and memory ballooning changes continuously. When virtual machines reach the virtual memory limit, new applications cannot start. When you plan the number of virtual machines to run on a host, use the maximum virtual memory (physical memory size and the Memory Optimization setting) as a starting point. Do not factor in the smaller virtual memory achieved by memory optimizations such as memory ballooning and KSM. 2.3.2.18. Swap Space and Memory Overcommitment Red Hat provides these recommendations for configuring swap space . When applying these recommendations, follow the guidance to size the swap space as "last effort memory" for a worst-case scenario. Use the physical memory size and Memory Optimization setting as a basis for estimating the total virtual memory size. Exclude any reduction of the virtual memory size from optimization by the MoM, memory ballooning, and KSM. Important To help prevent an OOM condition, make the swap space large enough to handle a worst-case scenario and still have a safety margin available. Always stress-test your configuration under a wide range of conditions before deploying it to production. 2.3.2.19. The Memory Overcommit Manager (MoM) The Memory Overcommit Manager (MoM) does two things: It limits memory overcommitment by applying the Memory Optimization setting to the hosts in a cluster, as described in the preceding section. It optimizes memory by managing the memory ballooning and KSM , as described in the following sections. You do not need to enable or disable MoM. When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log , the Memory Overcommit Manager log file. 2.3.2.20. Memory Ballooning Virtual machines start with the full amount of virtual memory you have assigned to them. As virtual memory usage exceeds RAM, the host relies more on swap space. If enabled, memory ballooning lets virtual machines give up the unused portion of that memory. The freed memory can be reused by other processes and virtual machines on the host. The reduced memory footprint makes swapping less likely and improves performance. The virtio-balloon package that provides the memory balloon device and drivers ships as a loadable kernel module (LKM). By default, it is configured to load automatically. Adding the module to the denylist or unloading it disables ballooning.
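As a quick check, you can confirm from inside a guest whether the balloon driver is currently loaded. The commands below are standard Linux module tooling rather than steps taken from this guide, so treat them as an illustrative sketch:
# Inside the guest, verify that the virtio balloon driver is loaded
lsmod | grep virtio_balloon
# Unloading the module disables ballooning for that guest until it is loaded again
modprobe -r virtio_balloon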
The memory balloon devices do not coordinate directly with each other; they rely on the host's Memory Overcommit Manager (MoM) process to continuously monitor each virtual machine's needs and instruct the balloon device to increase or decrease virtual memory. Performance considerations: Red Hat does not recommend memory ballooning and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools . Use memory ballooning when increasing virtual machine density (economy) is more important than performance. Memory ballooning does not have a significant impact on CPU utilization. (KSM consumes some CPU resources, but consumption remains consistent under pressure.) To enable memory ballooning, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable Memory Balloon Optimization checkbox. This setting enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the MoM starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. See Cluster Optimization Settings Explained . Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster . 2.3.2.21. Kernel Same-page Merging (KSM) When a virtual machine runs, it often creates duplicate memory pages for items such as common libraries and high-use data. Furthermore, virtual machines that run similar guest operating systems and applications produce duplicate memory pages in virtual memory. When enabled, Kernel Same-page Merging (KSM) examines the virtual memory on a host, eliminates duplicate memory pages, and shares the remaining memory pages across multiple applications and virtual machines. These shared memory pages are marked copy-on-write; if a virtual machine needs to write changes to the page, it makes a copy first before writing its modifications to that copy. While KSM is enabled, the MoM manages KSM. You do not need to configure or control KSM manually. KSM increases virtual memory performance in two ways. Because a shared memory page is used more frequently, the host is more likely to store it in cache or main memory, which improves the memory access speed. Additionally, with memory overcommitment, KSM reduces the virtual memory footprint, reducing the likelihood of swapping and improving performance. KSM consumes more CPU resources than memory ballooning. The amount of CPU KSM consumes remains consistent under pressure. Running identical virtual machines and applications on a host provides KSM with more opportunities to merge memory pages than running dissimilar ones. If you run mostly dissimilar virtual machines and applications, the CPU cost of using KSM may offset its benefits. Performance considerations: After the KSM daemon merges large amounts of memory, the kernel memory accounting statistics may eventually contradict each other. If your system has a large amount of free memory, you might improve performance by disabling KSM. Red Hat does not recommend KSM and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools . Use KSM when increasing virtual machine density (economy) is more important than performance.
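If you want to see how much sharing KSM is achieving on a particular host, the kernel exposes counters under /sys/kernel/mm/ksm . These are standard kernel interfaces shown here only as an illustrative check; MoM still decides whether KSM runs:
# 1 indicates that the KSM daemon is currently running on this host
cat /sys/kernel/mm/ksm/run
# Number of shared pages in use, and how many pages are sharing them
cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing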
To enable KSM, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable KSM checkbox. This setting enables MoM to run KSM when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. See Cluster Optimization Settings Explained . 2.3.2.22. UEFI and the Q35 chipset The Intel Q35 chipset, the default chipset for new virtual machines, includes support for the Unified Extensible Firmware Interface (UEFI), which replaces legacy BIOS. Alternatively you can configure a virtual machine or cluster to use the legacy Intel i440fx chipset, which does not support UEFI. UEFI provides several advantages over legacy BIOS, including the following: A modern boot loader SecureBoot, which authenticates the digital signatures of the boot loader GUID Partition Table (GPT), which enables disks larger than 2 TB To use UEFI on a virtual machine, you must configure the virtual machine's cluster for 4.4 compatibility or later. Then you can set UEFI for any existing virtual machine, or to be the default BIOS type for new virtual machines in the cluster. The following options are available: Table 2.14. Available BIOS Types BIOS Type Description Q35 Chipset with Legacy BIOS Legacy BIOS without UEFI (Default for clusters with compatibility version 4.4) Q35 Chipset with UEFI BIOS BIOS with UEFI Q35 Chipset with SecureBoot UEFI with SecureBoot, which authenticates the digital signatures of the boot loader Legacy i440fx chipset with legacy BIOS Setting the BIOS type before installing the operating system You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI is not supported after installing an operating system. 2.3.2.23. Configuring a cluster to use the Q35 Chipset and UEFI After upgrading a cluster to Red Hat Virtualization 4.4, all virtual machines in the cluster run the 4.4 version of VDSM. You can configure a cluster's default BIOS type, which determines the default BIOS type of any new virtual machines you create in that cluster. If necessary, you can override the cluster's default BIOS type by specifying a different BIOS type when you create a virtual machine. Procedure In the VM Portal or the Administration Portal, click Compute Clusters . Select a cluster and click Edit . Click General . Define the default BIOS type for new virtual machines in the cluster by clicking the BIOS Type dropdown menu, and selecting one of the following: Legacy Q35 Chipset with Legacy BIOS Q35 Chipset with UEFI BIOS Q35 Chipset with SecureBoot From the Compatibility Version dropdown menu select 4.4 . The Manager checks that all running hosts are compatible with 4.4, and if they are, the Manager uses 4.4 features. If any existing virtual machines in the cluster should use the new BIOS type, configure them to do so. Any new virtual machines in the cluster that are configured to use the BIOS type Cluster default now use the BIOS type you selected. For more information, see Configuring a virtual machine to use the Q35 Chipset and UEFI . Note Because you can change the BIOS type only before installing an operating system, for any existing virtual machines that are configured to use the BIOS type Cluster default , change the BIOS type to the default cluster BIOS type. Otherwise the virtual machine might not boot. Alternatively, you can reinstall the virtual machine's operating system. 2.3.2.24. 
Configuring a virtual machine to use the Q35 Chipset and UEFI You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI, or from UEFI to legacy BIOS, might prevent the virtual machine from booting. If you change the BIOS type of an existing virtual machine, reinstall the operating system. Warning If the virtual machine's BIOS type is set to Cluster default , changing the BIOS type of the cluster changes the BIOS type of the virtual machine. If the virtual machine has an operating system installed, changing the cluster BIOS type can cause booting the virtual machine to fail. Procedure To configure a virtual machine to use the Q35 chipset and UEFI: In the VM Portal or the Administration Portal, click Compute Virtual Machines . Select a virtual machine and click Edit . On the General tab, click Show Advanced Options . Click System Advanced Parameters . Select one of the following from the BIOS Type dropdown menu: Cluster default Q35 Chipset with Legacy BIOS Q35 Chipset with UEFI BIOS Q35 Chipset with SecureBoot Click OK . From the Virtual Machine portal or the Administration Portal, power off the virtual machine. The next time you start the virtual machine, it will run with the new BIOS type you selected. 2.3.2.25. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating that an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.
After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon ( ). You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. In a self-hosted engine environment, the Manager virtual machine does not need to be restarted. Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Virtual machines that have not been updated run with the old configuration, and the new configuration could be overwritten if other changes are made to the virtual machine before the reboot. Once you have updated the compatibility version of all clusters and virtual machines in a data center, you can then change the compatibility version of the data center itself.
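If you choose to reboot the updated virtual machines through the REST API rather than the Administration Portal, a request similar to the following can be used. The Manager host name, virtual machine ID, credentials, and CA certificate path are placeholders for your environment, and the admin@internal account is an assumption for illustration only:
# Reboot one virtual machine so that it picks up the updated cluster compatibility configuration
curl --cacert /etc/pki/ovirt-engine/ca.pem --user admin@internal:password --request POST --header "Content-Type: application/xml" --data "<action/>" https://manager.example.com/ovirt-engine/api/vms/<vm_id>/reboot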
[ "protocol://[host]:[port]" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-Clusters
Chapter 36. help
Chapter 36. help This chapter describes the commands under the help command. 36.1. help print detailed help for another command Usage: Table 36.1. Positional arguments Value Summary cmd Name of the command Table 36.2. Command arguments Value Summary -h, --help Show this help message and exit
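For example, to print the detailed help for another command such as server list (the command name here is only an illustration; any installed command works the same way):
openstack help server list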
[ "openstack help [-h] [cmd [cmd ...]]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/help
Preface
Preface Red Hat OpenShift Container Storage 4.8 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external Openshift Container Storage clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Container Storage, start with the requirements in Preparing to deploy OpenShift Container Storage chapter and then follow the appropriate deployment process for your environment: Internal mode Deploying OpenShift Container Storage on Red Hat OpenStack Platform in internal mode . External mode Deploying OpenShift Container Storage on Red Hat OpenStack Platform in external mode
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/preface-ocs-osp
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.412/providing-direct-documentation-feedback_openjdk
B.17.3.2. Incorrect drive device type
B.17.3.2. Incorrect drive device type Symptom The definition of the source image for the CD-ROM virtual drive is not present, despite being added: Solution Correct the XML by adding the missing <source> parameter as follows: A type='block' disk device expects that the source is a physical device. To use the disk with an image file, use type='file' instead.
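One way to apply the corrected XML, assuming the guest is defined in libvirt, is to edit the persistent domain definition and confirm the change. This is general libvirt usage rather than an additional step from this section:
# Open the domain definition in an editor and add the missing <source> element
virsh edit <domain>
# Confirm that the cdrom disk now includes the source file
virsh dumpxml <domain> | grep -A3 cdrom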
[ "virsh dumpxml domain <domain type='kvm'> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> </disk> </domain>", "<disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/path/to/image.iso'/> <target dev='hdc' bus='ide'/> <readonly/> </disk>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sec-app_xml_errors-incorrect_drive_device_type
Chapter 15. Tuning nodes for low latency with the performance profile
Chapter 15. Tuning nodes for low latency with the performance profile Tune nodes for low latency by using the cluster performance profile. You can restrict CPUs for infra and application containers, configure huge pages, Hyper-Threading, and configure CPU partitions for latency-sensitive processes. 15.1. Creating a performance profile You can create a cluster performance profile by using the Performance Profile Creator (PPC) tool. The PPC is a function of the Node Tuning Operator. The PPC combines information about your cluster with user-supplied configurations to generate a performance profile that is appropriate to your hardware, topology and use-case. Note Performance profiles are applicable only to bare-metal environments where the cluster has direct access to the underlying hardware resources. You can configure performances profiles for both single-node OpenShift and multi-node clusters. The following is a high-level workflow for creating and applying a performance profile in your cluster: Create a machine config pool (MCP) for nodes that you want to target with performance configurations. In single-node OpenShift clusters, you must use the master MCP because there is only one node in the cluster. Gather information about your cluster using the must-gather command. Use the PPC tool to create a performance profile by using either of the following methods: Run the PPC tool by using Podman. Run the PPC tool by using a wrapper script. Configure the performance profile for your use case and apply the performance profile to your cluster. 15.1.1. About the Performance Profile Creator The Performance Profile Creator (PPC) is a command-line tool, delivered with the Node Tuning Operator, that can help you to create a performance profile for your cluster. Initially, you can use the PPC tool to process the must-gather data to display key performance configurations for your cluster, including the following information: NUMA cell partitioning with the allocated CPU IDs Hyper-Threading node configuration You can use this information to help you configure the performance profile. Running the PPC Specify performance configuration arguments to the PPC tool to generate a proposed performance profile that is appropriate for your hardware, topology, and use-case. You can run the PPC by using one of the following methods: Run the PPC by using Podman Run the PPC by using the wrapper script Note Using the wrapper script abstracts some of the more granular Podman tasks into an executable script. For example, the wrapper script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman. Both methods achieve the same result. 15.1.2. Creating a machine config pool to target nodes for performance tuning For multi-node clusters, you can define a machine config pool (MCP) to identify the target nodes that you want to configure with a performance profile. In single-node OpenShift clusters, you must use the master MCP because there is only one node in the cluster. You do not need to create a separate MCP for single-node OpenShift clusters. Prerequisites You have cluster-admin role access. You installed the OpenShift CLI ( oc ). Procedure Label the target nodes for configuration by running the following command: USD oc label node <node_name> node-role.kubernetes.io/worker-cnf="" 1 1 Replace <node_name> with the name of your node. This example applies the worker-cnf label. 
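Optionally, you can confirm that the label was applied before creating the machine config pool. This verification command is a small addition to the documented step:
# List only the nodes that carry the worker-cnf label
oc get nodes -l node-role.kubernetes.io/worker-cnf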
Create a MachineConfigPool resource containing the target nodes: Create a YAML file that defines the MachineConfigPool resource: Example mcp-worker-cnf.yaml file apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf 1 labels: machineconfiguration.openshift.io/role: worker-cnf 2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-cnf], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: "" 3 1 Specify a name for the MachineConfigPool resource. 2 Specify a unique label for the machine config pool. 3 Specify the nodes with the target label that you defined. Apply the MachineConfigPool resource by running the following command: USD oc apply -f mcp-worker-cnf.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-cnf created Verification Check the machine config pools in your cluster by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s 15.1.3. Gathering data about your cluster for the PPC The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. You installed the OpenShift CLI ( oc ). You identified a target MCP that you want to configure with a performance profile. Procedure Navigate to the directory where you want to store the must-gather data. Collect cluster information by running the following command: USD oc adm must-gather The command creates a folder with the must-gather data in your local directory with a naming format similar to the following: must-gather.local.1971646453781853027 . Optional: Create a compressed file from the must-gather directory: USD tar cvaf must-gather.tar.gz <must_gather_folder> 1 1 Replace with the name of the must-gather data folder. Note Compressed output is required if you are running the Performance Profile Creator wrapper script. Additional resources For more information about the must-gather tool, see Gathering data about your cluster . 15.1.4. Running the Performance Profile Creator using Podman As a cluster administrator, you can use Podman with the Performance Profile Creator (PPC) to create a performance profile. For more information about the PPC arguments, see the section "Performance Profile Creator arguments" . Important The PPC uses the must-gather data from your cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the must-gather data before running PPC again. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare-metal hardware. You installed podman and the OpenShift CLI ( oc ). Access to the Node Tuning Operator image. You identified a machine config pool containing target nodes for configuration. You have access to the must-gather data for your cluster. 
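To confirm the Node Tuning Operator image prerequisite, you can try pulling the image after logging in to registry.redhat.io . The image and tag below match the ones used in the following examples and are shown only as an optional check:
podman pull registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16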
Procedure Check the machine config pool by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m Use Podman to authenticate to registry.redhat.io by running the following command: USD podman login registry.redhat.io Username: <user_name> Password: <password> Optional: Display help for the PPC tool by running the following command: USD podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 -h Example output A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled To display information about the cluster, run the PPC tool with the log argument by running the following command: USD podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --info log --must-gather-dir-path /must-gather --entrypoint performance-profile-creator defines the performance profile creator as a new entry point to podman . -v <path_to_must_gather> specifies the path to either of the following components: The directory containing the must-gather data. An existing directory containing the must-gather decompressed .tar file. --info log specifies a value for the output format. Example output level=info msg="Cluster info:" level=info msg="MCP 'master' nodes:" level=info msg=--- level=info msg="MCP 'worker' nodes:" level=info msg="Node: host.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg="Node: host1.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- level=info msg="MCP 'worker-cnf' nodes:" level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- Create a performance profile by running the following command. 
The example uses sample PPC arguments and values: USD podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml -v <path_to_must_gather> specifies the path to either of the following components: The directory containing the must-gather data. The directory containing the must-gather decompressed .tar file. --mcp-name=worker-cnf specifies the worker-cnf machine config pool. --reserved-cpu-count=1 specifies one reserved CPU. --rt-kernel=true enables the real-time kernel. --split-reserved-cpus-across-numa=false disables reserved CPUs splitting across NUMA nodes. --power-consumption-mode=ultra-low-latency specifies minimal latency at the cost of increased power consumption. --offlined-cpu-count=1 specifies one offlined CPU. Note The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Example output level=info msg="Nodes targeted by worker-cnf MCP are: [worker-2]" level=info msg="NUMA cell(s): 1" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg="1 reserved CPUs allocated: 0 " level=info msg="2 isolated CPUs allocated: 2-3" level=info msg="Additional Kernel Args based on configuration: []" Review the created YAML file by running the following command: USD cat my-performance-profile.yaml Example output --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: "1" reserved: "0" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true Apply the generated profile: USD oc apply -f my-performance-profile.yaml Example output performanceprofile.performance.openshift.io/performance created 15.1.5. Running the Performance Profile Creator wrapper script The wrapper script simplifies the process of creating a performance profile with the Performance Profile Creator (PPC) tool. The script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman. For more information about the Performance Profile Creator arguments, see the section "Performance Profile Creator arguments" . Important The PPC uses the must-gather data from your cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the must-gather data before running PPC again. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare-metal hardware. You installed podman and the OpenShift CLI ( oc ). Access to the Node Tuning Operator image. You identified a machine config pool containing target nodes for configuration. Access to the must-gather tarball.
Procedure Create a file on your local machine named, for example, run-perf-profile-creator.sh : USD vi run-perf-profile-creator.sh Paste the following code into the file: #!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename "USD0") readonly CMD="USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator" readonly IMG_EXISTS_CMD="USD{CONTAINER_RUNTIME} image exists" readonly IMG_PULL_CMD="USD{CONTAINER_RUNTIME} image pull" readonly MUST_GATHER_VOL="/must-gather" NTO_IMG="registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16" MG_TARBALL="" DATA_DIR="" usage() { print "Wrapper usage:" print " USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]" print "" print "Options:" print " -h help for USD{CURRENT_SCRIPT}" print " -p Node Tuning Operator image" print " -t path to a must-gather tarball" USD{IMG_EXISTS_CMD} "USD{NTO_IMG}" && USD{CMD} "USD{NTO_IMG}" -h } function cleanup { [ -d "USD{DATA_DIR}" ] && rm -rf "USD{DATA_DIR}" } trap cleanup EXIT exit_error() { print "error: USD*" usage exit 1 } print() { echo "USD*" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} "USD{NTO_IMG}" || USD{IMG_PULL_CMD} "USD{NTO_IMG}" || \ exit_error "Node Tuning Operator image not found" [ -n "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file path is mandatory" [ -f "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file not found" DATA_DIR=USD(mktemp -d -t "USD{CURRENT_SCRIPT}XXXX") || exit_error "Cannot create the data directory" tar -zxf "USD{MG_TARBALL}" --directory "USD{DATA_DIR}" || exit_error "Cannot decompress the must-gather tarball" chmod a+rx "USD{DATA_DIR}" return 0 } main() { while getopts ':hp:t:' OPT; do case "USD{OPT}" in h) usage exit 0 ;; p) NTO_IMG="USD{OPTARG}" ;; t) MG_TARBALL="USD{OPTARG}" ;; ?) exit_error "invalid argument: USD{OPTARG}" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v "USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z" "USD{NTO_IMG}" "USD@" --must-gather-dir-path "USD{MUST_GATHER_VOL}" echo "" 1>&2 } main "USD@" Add execute permissions for everyone on this script: USD chmod a+x run-perf-profile-creator.sh Use Podman to authenticate to registry.redhat.io by running the following command: USD podman login registry.redhat.io Username: <user_name> Password: <password> Optional: Display help for the PPC tool by running the following command: USD ./run-perf-profile-creator.sh -h Example output Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image -t path to a must-gather tarball A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. 
[Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled --enable-hardware-tuning Enable setting maximum CPU frequencies Note You can optionally set a path for the Node Tuning Operator image using the -p option. If you do not set a path, the wrapper script uses the default image: registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 . To display information about the cluster, run the PPC tool with the log argument by running the following command: USD ./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log -t /<path_to_must_gather_dir>/must-gather.tar.gz specifies the path to the directory containing the must-gather tarball. This is a required argument for the wrapper script. Example output level=info msg="Cluster info:" level=info msg="MCP 'master' nodes:" level=info msg=--- level=info msg="MCP 'worker' nodes:" level=info msg="Node: host.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg="Node: host1.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- level=info msg="MCP 'worker-cnf' nodes:" level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- Create a performance profile by running the following command. USD ./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml This example uses sample PPC arguments and values. --mcp-name=worker-cnf specifies the worker-cnf machine config pool. --reserved-cpu-count=1 specifies one reserved CPU. --rt-kernel=true enables the real-time kernel. --split-reserved-cpus-across-numa=false disables splitting the reserved CPUs across NUMA nodes. --power-consumption-mode=ultra-low-latency specifies minimal latency at the cost of increased power consumption. --offlined-cpu-count=1 specifies one offlined CPU. Note The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master .
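Optionally, before you review and apply the generated profile, you can confirm which nodes the target machine config pool selects. This is a sketch rather than a required step; it assumes the worker-cnf node label applied earlier in this chapter:
USD oc get nodes -l node-role.kubernetes.io/worker-cnf
The nodes listed should match the nodes reported in the "Nodes targeted by worker-cnf MCP" log message.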
Review the created YAML file by running the following command: USD cat my-performance-profile.yaml Example output --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: "1" reserved: "0" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true Apply the generated profile: USD oc apply -f my-performance-profile.yaml Example output performanceprofile.performance.openshift.io/performance created 15.1.6. Performance Profile Creator arguments Table 15.1. Required Performance Profile Creator arguments Argument Description mcp-name Name for MCP; for example, worker-cnf corresponding to the target machines. must-gather-dir-path The path of the must gather directory. This argument is only required if you run the PPC tool by using Podman. If you use the PPC with the wrapper script, do not use this argument. Instead, specify the directory path to the must-gather tarball by using the -t option for the wrapper script. reserved-cpu-count Number of reserved CPUs. Use a natural number greater than zero. rt-kernel Enables real-time kernel. Possible values: true or false . Table 15.2. Optional Performance Profile Creator arguments Argument Description disable-ht Disable Hyper-Threading. Possible values: true or false . Default: false . Warning If this argument is set to true you should not disable Hyper-Threading in the BIOS. Disabling Hyper-Threading is accomplished with a kernel command line argument. enable-hardware-tuning Enable the setting of maximum CPU frequencies. To enable this feature, set the maximum frequency for applications running on isolated and reserved CPUs for both of the following fields: spec.hardwareTuning.isolatedCpuFreq spec.hardwareTuning.reservedCpuFreq This is an advanced feature. If you configure hardware tuning, the generated PerformanceProfile includes warnings and guidance on how to set frequency settings. info This captures cluster information. This argument also requires the must-gather-dir-path argument. If any other arguments are set they are ignored. Possible values: log JSON Default: log . offlined-cpu-count Number of offlined CPUs. Note Use a natural number greater than zero. If not enough logical processors are offlined, then error messages are logged. The messages are: Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1] Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1] power-consumption-mode The power consumption mode. Possible values: default : Performance achieved through CPU partitioning only. low-latency : Enhanced measures to improve latency. ultra-low-latency : Priority given to optimal latency, at the expense of power management. Default: default . per-pod-power-management Enable per pod power management. You cannot use this argument if you configured ultra-low-latency as the power consumption mode. Possible values: true or false . Default: false . profile-name Name of the performance profile to create. Default: performance . split-reserved-cpus-across-numa Split the reserved CPUs across NUMA nodes. Possible values: true or false . Default: false . 
topology-manager-policy Kubelet Topology Manager policy of the performance profile to be created. Possible values: single-numa-node best-effort restricted Default: restricted . user-level-networking Run with user level networking (DPDK) enabled. Possible values: true or false . Default: false . 15.1.7. Reference performance profiles Use the following reference performance profiles as the basis to develop your own custom profiles. 15.1.7.1. Performance profile template for clusters that use OVS-DPDK on OpenStack To maximize machine performance in a cluster that uses Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you can use a performance profile. You can use the following performance profile template to create a profile for your deployment. Performance profile template for clusters that use OVS-DPDK apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true Insert values that are appropriate for your configuration for the CPU_ISOLATED , CPU_RESERVED , and HUGEPAGES_COUNT keys. 15.1.7.2. Telco RAN DU reference design performance profile The following performance profile configures node-level performance settings for OpenShift Container Platform clusters on commodity hardware to host telco RAN DU workloads. Telco RAN DU reference design performance profile apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false 15.1.7.3. Telco core reference design performance profile The following performance profile configures node-level performance settings for OpenShift Container Platform clusters on commodity hardware to host telco core workloads. 
Telco core reference design performance profile apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false 15.2. Supported performance profile API versions The Node Tuning Operator supports v2 , v1 , and v1alpha1 for the performance profile apiVersion field. The v1 and v1alpha1 APIs are identical. The v2 API includes an optional boolean field globallyDisableIrqLoadBalancing with a default value of false . Upgrading the performance profile to use device interrupt processing When you upgrade the Node Tuning Operator performance profile custom resource definition (CRD) from v1 or v1alpha1 to v2, globallyDisableIrqLoadBalancing is set to true on existing profiles. Note globallyDisableIrqLoadBalancing toggles whether IRQ load balancing will be disabled for the Isolated CPU set. When the option is set to true it disables IRQ load balancing for the Isolated CPU set. Setting the option to false allows the IRQs to be balanced across all CPUs. Upgrading Node Tuning Operator API from v1alpha1 to v1 When upgrading Node Tuning Operator API version from v1alpha1 to v1, the v1alpha1 performance profiles are converted on-the-fly using a "None" Conversion strategy and served to the Node Tuning Operator with API version v1. Upgrading Node Tuning Operator API from v1alpha1 or v1 to v2 When upgrading from an older Node Tuning Operator API version, the existing v1 and v1alpha1 performance profiles are converted using a conversion webhook that injects the globallyDisableIrqLoadBalancing field with a value of true . 15.3. Configuring node power consumption and realtime processing with workload hints Procedure Create a PerformanceProfile appropriate for the environment's hardware and topology by using the Performance Profile Creator (PPC) tool. The following table describes the possible values set for the power-consumption-mode flag associated with the PPC tool and the workload hint that is applied. Table 15.3. 
Impact of combinations of power consumption and real-time settings on latency Performance Profile creator setting Hint Environment Description Default workloadHints: highPowerConsumption: false realTime: false High throughput cluster without latency requirements Performance achieved through CPU partitioning only. Low-latency workloadHints: highPowerConsumption: false realTime: true Regional data-centers Both energy savings and low-latency are desirable: compromise between power management, latency and throughput. Ultra-low-latency workloadHints: highPowerConsumption: true realTime: true Far edge clusters, latency critical workloads Optimized for absolute minimal latency and maximum determinism at the cost of increased power consumption. Per-pod power management workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Critical and non-critical workloads Allows for power management per pod. Example The following configuration is commonly used in a telco RAN DU deployment. apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: ... workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1 1 Disables some debugging and monitoring features that can affect system latency. Note When the realTime workload hint flag is set to true in a performance profile, add the cpu-quota.crio.io: disable annotation to every guaranteed pod with pinned CPUs. This annotation is necessary to prevent the degradation of the process performance within the pod. If the realTime workload hint is not explicitly set, it defaults to true . For more information how combinations of power consumption and real-time settings impact latency, see Understanding workload hints . 15.4. Configuring power saving for nodes that run colocated high and low priority workloads You can enable power savings for a node that has low priority workloads that are colocated with high priority workloads without impacting the latency or throughput of the high priority workloads. Power saving is possible without modifications to the workloads themselves. Important The feature is supported on Intel Ice Lake and later generations of Intel CPUs. The capabilities of the processor might impact the latency and throughput of the high priority workloads. Prerequisites You enabled C-states and operating system controlled P-states in the BIOS Procedure Generate a PerformanceProfile with the per-pod-power-management argument set to true : USD podman run --entrypoint performance-profile-creator -v \ /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 \ --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true \ --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node \ --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \ 1 --per-pod-power-management=true > my-performance-profile.yaml 1 The power-consumption-mode argument must be default or low-latency when the per-pod-power-management argument is set to true . Example PerformanceProfile with perPodPowerManagement apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Set the default cpufreq governor as an additional kernel argument in the PerformanceProfile custom resource (CR): apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: ... additionalKernelArgs: - cpufreq.default_governor=schedutil 1 1 Using the schedutil governor is recommended, however, you can use other governors such as the ondemand or powersave governors. Set the maximum CPU frequency in the TunedPerformancePatch CR: spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1 1 The max_perf_pct controls the maximum frequency that the cpufreq driver is allowed to set as a percentage of the maximum supported cpu frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores will run at when the cores are all fully occupied. Additional resources About the Performance Profile Creator Disabling power saving mode for high priority pods Managing device interrupt processing for guaranteed pod isolated CPUs 15.5. Restricting CPUs for infra and application containers Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Node Tuning Operator: Table 15.4. Process' CPU assignments Process type Details Burstable and BestEffort pods Runs on any CPU except where low latency workload is running Infrastructure pods Runs on any CPU except where low latency workload is running Interrupts Redirects to reserved CPUs (optional in OpenShift Container Platform 4.7 and later) Kernel processes Pins to reserved CPUs Latency-sensitive workload pods Pins to a specific set of exclusive CPUs from the isolated pool OS processes/systemd services Pins to reserved CPUs The allocatable capacity of cores on a node for pods of all QoS process types, Burstable , BestEffort , or Guaranteed , is equal to the capacity of the isolated pool. The capacity of the reserved pool is removed from the node's total core capacity for use by the cluster and operating system housekeeping duties. Example 1 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 25 cores to QoS Guaranteed pods and 25 cores for BestEffort or Burstable pods. This matches the capacity of the isolated pool. Example 2 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 50 cores to QoS Guaranteed pods and one core for BestEffort or Burstable pods. This exceeds the capacity of the isolated pool by one core. Pod scheduling fails because of insufficient CPU capacity. 
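As an illustration of how a latency-sensitive workload consumes exclusive CPUs from the isolated pool, the following is a minimal sketch of a Guaranteed QoS pod; the pod name, image, and resource values are examples only. A pod receives exclusive CPUs when every container sets equal CPU requests and limits using whole CPUs, assuming the static CPU manager policy that a performance profile typically enables:
apiVersion: v1
kind: Pod
metadata:
  name: example-latency-sensitive-app # illustrative name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
  containers:
  - name: app
    image: registry.example.com/app:latest # placeholder image
    resources:
      requests:
        cpu: "2" # whole CPUs; requests equal limits, so the pod is Guaranteed QoS
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"
Pods that do not meet these conditions are classified as Burstable or BestEffort and run on any CPU that is not exclusively allocated to a low latency workload.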
The exact partitioning pattern to use depends on many factors like hardware, workload characteristics and the expected system load. Some sample use cases are as follows: If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node. The reserved pool is used for handling all interrupts. When depending on system networking, allocate a sufficiently-sized reserve pool to handle all the incoming packet interrupts. In 4.16 and later versions, workloads can optionally be labeled as sensitive. The decision regarding which specific CPUs should be used for reserved and isolated partitions requires detailed analysis and measurements. Factors like NUMA affinity of devices and memory play a role. The selection also depends on the workload architecture and the specific use case. Important The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the spec section of the performance profile. isolated - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth. reserved - Specifies the CPUs for the cluster and operating system housekeeping duties. Threads in the reserved group are often busy. Do not run latency-sensitive applications in the reserved group. Latency-sensitive applications run in the isolated group. Procedure Create a performance profile appropriate for the environment's hardware and topology. Add the reserved and isolated parameters with the CPUs you want reserved and isolated for the infra and application containers: \ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: "0-4,9" 1 isolated: "5-8" 2 nodeSelector: 3 node-role.kubernetes.io/worker: "" 1 Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties. 2 Specify which CPUs are for application containers to run workloads. 3 Optional: Specify a node selector to apply the performance profile to specific nodes. 15.6. Configuring Hyper-Threading for a cluster To configure Hyper-Threading for an OpenShift Container Platform cluster, set the CPU threads in the performance profile to the same cores that are configured for the reserved or isolated CPU pools. Note If you configure a performance profile, and subsequently change the Hyper-Threading configuration for the host, ensure that you update the CPU isolated and reserved fields in the PerformanceProfile YAML to match the new configuration. Warning Disabling a previously enabled host Hyper-Threading configuration can cause the CPU core IDs listed in the PerformanceProfile YAML to be incorrect. This incorrect configuration can cause the node to become unavailable because the listed CPUs can no longer be found. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI (oc). Procedure Ascertain which threads are running on what CPUs for the host you want to configure. 
You can view which threads are running on the host CPUs by logging in to the cluster and running the following command: USD lscpu --all --extended Example output CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000 In this example, there are eight logical CPU cores running on four physical CPU cores. CPU0 and CPU4 are running on physical Core0, CPU1 and CPU5 are running on physical Core 1, and so on. Alternatively, to view the threads that are set for a particular physical CPU core ( cpu0 in the example below), open a shell prompt and run the following: USD cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list Example output 0-4 Apply the isolated and reserved CPUs in the PerformanceProfile YAML. For example, you can set logical cores CPU0 and CPU4 as isolated , and logical cores CPU1 to CPU3 and CPU5 to CPU7 as reserved . When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. ... cpu: isolated: 0,4 reserved: 1-3,5-7 ... Note The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. Important Hyper-Threading is enabled by default on most Intel processors. If you enable Hyper-Threading, all threads processed by a particular core must be isolated or processed on the same core. When Hyper-Threading is enabled, all guaranteed pods must use multiples of the simultaneous multi-threading (SMT) level to avoid a "noisy neighbor" situation that can cause the pod to fail. See Static policy options for more information. 15.6.1. Disabling Hyper-Threading for low latency applications When configuring clusters for low latency processing, consider whether you want to disable Hyper-Threading before you deploy the cluster. To disable Hyper-Threading, perform the following steps: Create a performance profile that is appropriate for your hardware and topology. Set nosmt as an additional kernel argument. The following example performance profile illustrates this setting: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true Note When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. 15.7. Managing device interrupt processing for guaranteed pod isolated CPUs The Node Tuning Operator can manage host CPUs by dividing them into reserved CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolated CPUs for application containers to run the workloads. This allows you to set CPUs for low latency workloads as isolated. Device interrupts are load balanced between all isolated and reserved CPUs to avoid CPUs being overloaded, with the exception of CPUs where there is a guaranteed pod running. 
Guaranteed pod CPUs are prevented from processing device interrupts when the relevant annotations are set for the pod. In the performance profile, globallyDisableIrqLoadBalancing is used to manage whether device interrupts are processed or not. For certain workloads, the reserved CPUs are not always sufficient for dealing with device interrupts, and for this reason, device interrupts are not globally disabled on the isolated CPUs. By default, Node Tuning Operator does not disable device interrupts on isolated CPUs. 15.7.1. Finding the effective IRQ affinity setting for a node Some IRQ controllers lack support for IRQ affinity setting and will always expose all online CPUs as the IRQ mask. These IRQ controllers effectively run on CPU 0. The following are examples of drivers and hardware that Red Hat are aware lack support for IRQ affinity setting. The list is, by no means, exhaustive: Some RAID controller drivers, such as megaraid_sas Many non-volatile memory express (NVMe) drivers Some LAN on motherboard (LOM) network controllers The driver uses managed_irqs Note The reason they do not support IRQ affinity setting might be associated with factors such as the type of processor, the IRQ controller, or the circuitry connections in the motherboard. If the effective affinity of any IRQ is set to an isolated CPU, it might be a sign of some hardware or driver not supporting IRQ affinity setting. To find the effective affinity, log in to the host and run the following command: USD find /proc/irq -name effective_affinity -printf "%p: " -exec cat {} \; Example output /proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2 Some drivers use managed_irqs , whose affinity is managed internally by the kernel and userspace cannot change the affinity. In some cases, these IRQs might be assigned to isolated CPUs. For more information about managed_irqs , see Affinity of managed interrupts cannot be changed even if they target isolated CPU . 15.7.2. Configuring node interrupt affinity Configure a cluster node for IRQ dynamic load balancing to control which cores can receive device interrupt requests (IRQ). Prerequisites For core isolation, all server hardware components must support IRQ affinity. To check if the hardware components of your server support IRQ affinity, view the server's hardware specifications or contact your hardware provider. Procedure Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Set the performance profile apiVersion to use performance.openshift.io/v2 . Remove the globallyDisableIrqLoadBalancing field or set it to false . Set the appropriate isolated and reserved CPUs. 
The following snippet illustrates a profile that reserves 2 CPUs. IRQ load-balancing is enabled for pods running on the isolated CPU set: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1 ... Note When you configure reserved and isolated CPUs, operating system processes, kernel processes, and systemd services run on reserved CPUs. Infrastructure pods run on any CPU except where the low latency workload is running. Low latency workload pods run on exclusive CPUs from the isolated pool. For more information, see "Restricting CPUs for infra and application containers". 15.8. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. Use the Node Tuning Operator to allocate huge pages on a specific node. OpenShift Container Platform provides a method for creating and allocating huge pages. Node Tuning Operator provides an easier method for doing this using the performance profile. For example, in the hugepages pages section of the performance profile, you can specify multiple blocks of size , count , and, optionally, node : hugepages: defaultHugepagesSize: "1G" pages: - size: "1G" count: 4 node: 0 1 1 node is the NUMA node in which the huge pages are allocated. If you omit node , the pages are evenly spread across all NUMA nodes. Note Wait for the relevant machine config pool status that indicates the update is finished. These are the only configuration steps you need to do to allocate huge pages. Verification To verify the configuration, see the /proc/meminfo file on the node: USD oc debug node/ip-10-0-141-105.ec2.internal # grep -i huge /proc/meminfo Example output AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ## Use oc describe to report the new size: USD oc describe node worker-0.ocp4poc.example.com | grep -i huge Example output hugepages-1g=true hugepages-###: ### hugepages-###: ### 15.8.1. Allocating multiple huge page sizes You can request huge pages with different sizes under the same container. This allows you to define more complicated pods consisting of containers with different huge page size needs. For example, you can define sizes 1G and 2M and the Node Tuning Operator will configure both sizes on the node, as shown here: spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G 15.9. Reducing NIC queues using the Node Tuning Operator The Node Tuning Operator facilitates reducing NIC queues for enhanced performance. Adjustments are made using the performance profile, allowing customization of queues for different network devices. 15.9.1. Adjusting the NIC queues with the performance profile The performance profile lets you adjust the queue count for each network device. Supported network devices: Non-virtual network devices Network devices that support multiple queues (channels) Unsupported network devices: Pure software network interfaces Block devices Intel DPDK virtual functions Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform cluster running the Node Tuning Operator as a user with cluster-admin privileges. Create and apply a performance profile appropriate for your hardware and topology. For guidance on creating a profile, see the "Creating a performance profile" section. 
Edit this created performance profile: USD oc edit -f <your_profile_name>.yaml Populate the spec field with the net object. The object list can contain two fields: userLevelNetworking is a required field specified as a boolean flag. If userLevelNetworking is true , the queue count is set to the reserved CPU count for all supported devices. The default is false . devices is an optional field specifying a list of devices that will have the queues set to the reserved CPU count. If the device list is empty, the configuration applies to all network devices. The configuration is as follows: interfaceName : This field specifies the interface name, and it supports shell-style wildcards, which can be positive or negative. Example wildcard syntax is as follows: <string> .* Negative rules are prefixed with an exclamation mark. To apply the net queue changes to all devices other than the excluded list, use !<device> , for example, !eno1 . vendorID : The network device vendor ID represented as a 16-bit hexadecimal number with a 0x prefix. deviceID : The network device ID (model) represented as a 16-bit hexadecimal number with a 0x prefix. Note When a deviceID is specified, the vendorID must also be defined. A device that matches all of the device identifiers specified in a device entry interfaceName , vendorID , or a pair of vendorID plus deviceID qualifies as a network device. This network device then has its net queues count set to the reserved CPU count. When two or more devices are specified, the net queues count is set to any net device that matches one of them. Set the queue count to the reserved CPU count for all devices by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices matching any of the defined device identifiers by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth0" - interfaceName: "eth1" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices starting with the interface name eth by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth*" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices with an interface named anything other than eno1 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "!eno1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices that have an interface name eth0 , vendorID of 0x1af4 , and deviceID of 0x1000 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true 
devices: - interfaceName: "eth0" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Apply the updated performance profile: USD oc apply -f <your_profile_name>.yaml Additional resources Creating a performance profile . 15.9.2. Verifying the queue status In this section, a number of examples illustrate different performance profiles and how to verify the changes are applied. Example 1 In this example, the net queue count is set to the reserved CPU count (2) for all supported devices. The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status before the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4 Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The combined channel shows that the total count of reserved CPUs for all supported devices is 2. This matches what is configured in the performance profile. Example 2 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices with a specific vendorID . The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4 # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is 2. For example, if there is another network device ens2 with vendorID=0x1af4 it will also have total net queues of 2. This matches what is configured in the performance profile. Example 3 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices that match any of the defined device identifiers. The command udevadm info provides a detailed report on a device. In this example the devices are: # udevadm info -p /sys/class/net/ens4 ... E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4 ... # udevadm info -p /sys/class/net/eth0 ... E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0 ... Set the net queues to 2 for a device with interfaceName equal to eth0 and any devices that have a vendorID=0x1af4 with the following performance profile: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4 ... 
Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is set to 2. For example, if there is another network device ens2 with vendorID=0x1af4 , it will also have the total net queues set to 2. Similarly, a device with interfaceName equal to eth0 will have total net queues set to 2. 15.9.3. Logging associated with adjusting NIC queues Log messages detailing the assigned devices are recorded in the respective Tuned daemon logs. The following messages might be recorded to the /var/log/tuned/tuned.log file: An INFO message is recorded detailing the successfully assigned devices: INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3 A WARNING message is recorded if none of the devices can be assigned: WARNING tuned.plugins.base: instance net_test: no matching devices available
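To check these messages on a specific node, one approach (a sketch; the node name is a placeholder) is to open a debug shell on the node and search the TuneD log:
USD oc debug node/<node_name>
# chroot /host
# grep "assigning devices" /var/log/tuned/tuned.log
If no INFO message appears for the expected interface, compare the device identifiers in the performance profile against the output of udevadm info for that interface.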
[ "oc label node <node_name> node-role.kubernetes.io/worker-cnf=\"\" 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf 1 labels: machineconfiguration.openshift.io/role: worker-cnf 2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-cnf], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: \"\" 3", "oc apply -f mcp-worker-cnf.yaml", "machineconfigpool.machineconfiguration.openshift.io/worker-cnf created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_folder> 1", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 -h", "A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --info log --must-gather-dir-path /must-gather", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "level=info msg=\"Nodes targeted by worker-cnf MCP are: [worker-2]\" level=info msg=\"NUMA cell(s): 1\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"1 reserved CPUs allocated: 0 \" level=info msg=\"2 isolated CPUs allocated: 2-3\" level=info msg=\"Additional Kernel Args based on configuration: []\"", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "vi run-perf-profile-creator.sh", "#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" NTO_IMG=\"registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Node Tuning Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" && USD{CMD} \"USD{NTO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" || USD{IMG_PULL_CMD} \"USD{NTO_IMG}\" || exit_error \"Node Tuning Operator image not found\" [ -n 
\"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) NTO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{NTO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"", "chmod a+x run-perf-profile-creator.sh", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "./run-perf-profile-creator.sh -h", "Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image -t path to a must-gather tarball A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled --enable-hardware-tuning Enable setting maximum CPU frequencies", "./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]", "Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1]", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" 
nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "workloadHints: highPowerConsumption: false realTime: false", "workloadHints: highPowerConsumption: false realTime: true", "workloadHints: highPowerConsumption: true realTime: true", "workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.16 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - cpufreq.default_governor=schedutil 1", "spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1", "\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"", "lscpu --all --extended", "CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000", "cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "0-4", "cpu: isolated: 0,4 reserved: 1-3,5-7", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true", "find /proc/irq -name effective_affinity -printf \"%p: \" -exec cat {} \\;", "/proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1", "hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1", "oc debug node/ip-10-0-141-105.ec2.internal", "grep -i huge /proc/meminfo", "AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##", "oc describe node worker-0.ocp4poc.example.com | grep -i huge", "hugepages-1g=true hugepages-###: ### hugepages-###: ###", "spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G", "oc edit -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile 
metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc apply -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4", "udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3", "WARNING tuned.plugins.base: instance net_test: no matching devices available" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile
Chapter 5. Test cases
Chapter 5. Test cases After finishing the installation, it is recommended to run some basic tests to check the installation and verify how SAP HANA Multitarget System Replication is working and how it recovers from a failure. It is always a good practice to run these test cases before starting production. If possible, you can also prepare a test environment to verify changes before applying them in production. Each test case describes: Subject of the test Test preconditions Test steps Monitoring the test Starting the test Expected result(s) Ways to return to an initial state To automatically register a former primary HANA replication site as a new secondary HANA replication site on the HANA instances that are managed by the cluster, you can use the option AUTOMATED_REGISTER=true in the SAPHana resource. For more details, refer to AUTOMATED_REGISTER . The name of the HA cluster nodes and the HANA replication sites (in brackets) used in the examples are: clusternode1 (DC1) clusternode2 (DC2) remotehost3 (DC3) The following parameters are used for configuring the HANA instances and the cluster: SID=RH2 INSTANCENUMBER=02 CLUSTERNAME=cluster1 You can also use clusternode1, clusternode2, and remotehost3 as aliases in /etc/hosts in your test environment. The tests are described in more detail, including examples and additional checks of preconditions. At the end, there are examples of how to clean up the environment to be prepared for further testing. In some cases, if the distance between the cluster nodes and remotehost3 is too long, you should use --replicationMode=async instead of --replicationMode=syncmem . Please also ask your SAP HANA administrator before choosing the right option. 5.1. Prepare the tests Before we run a test, the complete environment needs to be in a correct and healthy state. We have to check the cluster and the database via: pcs status --full python USDDIR_EXECUTABLE/python_support/systemReplicationStatus.py df -h An example for pcs status --full can be found in Check cluster status with pcs status . If there are warnings or failures in the "Migration Summary", you should clean up the cluster before you start your test. [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Cluster Cleanup describes some more ways to do it. It is important that the cluster and all the resources be started. Besides the cluster, the database should also be up and running and in sync. The easiest way to verify the proper status of the database is to check the system replication status. See also Replication Status . This should be checked on the primary database. To discover the primary node, you can check Discover Primary Database or use: [root@clusternode1]# pcs status | grep -E "Promoted|Master" [root@clusternode1]# hdbnsutil -sr_stateConfiguration Check if there is enough space on the file systems by running: [root@clusternode1]# df -h Please also follow the guidelines for a system check before you continue. If the environment is clean, it is ready to run the tests. 5.2. Monitor the environment This section focuses on monitoring the environment during the tests and only covers the monitors that are necessary to see the changes. It is recommended to run these monitors from dedicated terminals and to start them before starting the test, so that you can detect the changes during the test. In the Useful Commands section, more examples are shown. 5.2.1.
Discover the primary node You need to discover the primary node to monitor a failover or run certain commands that only provide information about the replication status when executed on the primary node. To discover the primary node, you can run the following commands as the <sid>adm user: clusternode1:rh2adm> watch -n 5 'hdbnsutil -sr_stateConfiguration | egrep -e "primary masters|^mode"' Output example, when clusternode2 is the primary database: mode: syncmem primary masters: clusternode2 Output on the node that runs the primary database is: mode: primary 5.2.2. Check the Replication status The replication status shows the relationship between primary and secondary database nodes and the current status of the replication. To discover the replication status, you can run as the <sid>adm user: clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration If you want to permanently monitor changes in the system replication status, please run the following command: clusternode1:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?' This example also determines the current return code. As long as the return code (status) is 15, the replication status is fine. The other return codes are: 10: NoHSR 11: Error 12: Unknown 13: Initializing 14: Syncing 15: Active If you register a new secondary, you can run it in a separate window on the primary node and you will see the progress of the replication. If you want to monitor a failover, you can run it in parallel on the old primary as well as on the new primary database server. For more information, please read Check SAP HANA System Replication Status . 5.2.3. Check /var/log/messages entries Pacemaker is writing a lot of information into the /var/log/messages file. During a failover, a huge number of messages are written into this message file. To be able to follow only the important messages depending on the SAP HANA resource agent, it is useful to filter the detailed activities of the pacemaker SAP resources. It is enough to check the messages file on a single cluster node. For example, you can use this alias: [root@clusternode1]# alias tmsl='tail -1000f /var/log/messages | egrep -s "Setting master-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT"' Run tmsl in a separate window to monitor the progress of the test. Please also check the example Monitor failover and sync state . 5.2.4. Cluster status There are several ways to check the cluster status. Check if the cluster is running: pcs cluster status Check the cluster and all resources: pcs status Check the cluster, all resources and all node attributes: pcs status --full Check the resources only: pcs resource The pcs status --full command will give you all the necessary information. To monitor changes, you can run this command together with watch: [root@clusternode1]# watch pcs status --full An output example and further options can be found in Check cluster status . 5.2.5. Discover leftovers To ensure that your environment is ready to run the test, leftovers from tests need to be fixed or removed. stonith is used to fence a node in the cluster: Detect: [root@clusternode1]# pcs stonith history Fix: [root@clusternode1]# pcs stonith cleanup Multiple primary databases: Detect: clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration | grep -i primary All nodes with the same primary need to be identified. 
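If you want to collect this from a single terminal, a minimal hedged sketch is to run the same check against every site over SSH (this assumes password-less SSH as the rh2adm user to all three hosts and is only a convenience, not part of the documented procedure):
[root@clusternode1]# ssh rh2adm@clusternode1 "hdbnsutil -sr_stateConfiguration | egrep -i '^mode|primary masters'"
[root@clusternode1]# ssh rh2adm@clusternode2 "hdbnsutil -sr_stateConfiguration | egrep -i '^mode|primary masters'"
[root@clusternode1]# ssh rh2adm@remotehost3 "hdbnsutil -sr_stateConfiguration | egrep -i '^mode|primary masters'"
Every site that reports mode: primary is a primary candidate; if more than one site does, fix it as described next.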
Fix: re-register the wrong primary with option --force_full_replica Location Constraints caused by move: Detect: [root@clusternode1]# pcs constraint location Check the warning section. Fix: [root@clusternode1]# pcs resource clear <clone-resource-which was moved> Secondary replication relationship: Detect: on the primary database run clusternode1:rh2adm> python USDDIR_EXECUTABLE/python_support/systemReplicationStatus.py Fix: unregister and re-register the secondary databases. Check siteReplicationMode (same output on all SAP HANA nodes) clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode Pcs property: Detect: [root@clusternode1]# pcs property config Fix: [root@clusternode1]# pcs property set <key=value> Clear maintenance-mode . [root@clusternode1]# pcs property set maintenance-mode=false log_mode: Detect: clusternode1:rh2adm> python systemReplicationStatus.py The replication status output will report that log_mode=normal is required. log_mode can be detected as described in Using hdbsql to check Inifile contents . Fix: change the log_mode to normal and restart the primary database. CIB entries: Detect: SFAIL entries in the cluster information base. Please refer to Check cluster consistency , to find and remove CIB entries. Cleanup/clear: Detect: [root@clusternode1]# pcs status --full Sometimes it shows errors or warnings. You can clean up/clear resources, and if everything is fine, nothing happens. Before running the test, you can clean up your environment. Examples to fix: [root@clusternode1]# pcs resource clear <name-of-the-clone-resource> [root@clusternode1]# pcs resource cleanup <name-of-the-clone-resource> This is also useful if you want to check if there is an issue in an existing environment. For more information, please refer to Useful commands . 5.3. Test 1: Failover of the primary node with an active third site Subject of the test Automatic re-registration of the third site. Sync state changes to SOK after clearing. Test preconditions SAP HANA on DC1, DC2, DC3 are running. Cluster is up and running without errors or warnings. Test steps Move the SAPHana resource using the command: [root@clusternode1]# pcs resource move <sap-clone-resource> <target-node> Monitoring the test On the third site, run as rh2adm the command provided at the end of the table.(*) On the secondary node run as root: [root@clusternode1]# watch pcs status --full Starting the test Execute the cluster command: [root@clusternode1]# pcs resource move SAPHana_RH2_02-clone <target-node> [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Expected result In the monitor command on site 3 the primary master changes from clusternode1 to clusternode2. After clearing the resource the sync state will change from SFAIL to SOK . Ways to return to an initial state Run the test twice. (*) remotehost3:rh2adm> watch hdbnsutil -sr_state [root@clusternode1]# tail -1000f /var/log/messages |egrep -e 'SOK|SWAIT|SFAIL' Detailed description Check the initial state of your cluster as root on clusternode1 or clusternode2.
[root@clusternode1]# pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled This output shows you that HANA is promoted on clusternode1, which is the primary SAP HANA server and that the name of the clone resource is SAPHana_RH2_02-clone which is promotable. You can run this in a separate window during the test to see changes: [root@clusternode1]# watch pcs status --full Another way to identify the name of the SAP HANA clone resource is: [root@clusternode2]# pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * Started: [ clusternode1 clusternode2 ] * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * Promoted: [ clusternode2 ] * Unpromoted: [ clusternode1 ] To see the change of the primary server start monitoring on remotehost3 on a separate terminal window before you start the test. remotehost3:rh2adm> watch 'hdbnsutil -sr_state | grep "primary masters" The output will look like: Every 2.0s: hdbnsutil -sr_state | grep "primary masters" remotehost3: Mon Sep 4 08:47:21 2023 primary masters: clusternode1 During the test the expected output will change to clusternode2. Start the test by moving the clone resource discovered above to clusternode2: [root@clusternode1]# pcs resource move SAPhana_RH2_02-clone clusternode2 The output of the monitor on remotehost3 will change to: Every 2.0s: hdbnsutil -sr_state | grep "primary masters" remotehost3: Mon Sep 4 08:50:31 2023 primary masters: clusternode2 Pacemaker creates a location constraint for moving the clone resource. This needs to be manually removed. 
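If you only want a quick confirmation that the move left such a constraint behind, one hedged shortcut is to search for the cli-prefer entry that pcs creates for manual moves (the constraint name below is the one from this example):
[root@clusternode1]# pcs constraint --full | grep cli-prefer-
If this prints cli-prefer-SAPHana_RH2_02-clone, the constraint is still in place. The full check and the cleanup are shown in the next steps.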
You can see the constraint using: [root@clusternode1]# pcs constraint location This constraint needs to be removed. Clear the clone resource to remove the location constraint: [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone Cleanup the resource: [root@clusternode1]# pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... got reply (done) Result of the test The "primary masters" monitor on remotehost3 should show an immediate switch to the new primary node. If you check the cluster status, the former secondary gets promoted, the former primary gets re-registered, and the Clone_State changes from Promoted to Undefined to WAITINGFORLPA to DEMOTED . The secondary will change the sync_state to SFAIL when the SAPHana monitor is started for the first time after the failover. Because of the existing location constraint the resource needs to be cleared, and after a short time the sync_state of the secondary will change to SOK again. To restore the initial state, you can simply run the test again. After finishing the tests, please run a cleanup . 5.4. Test 2: Failover of the primary node with passive third site Subject of the test No re-registration of the stopped third site. Failover works even if the third site is down. Test preconditions SAP HANA is running on DC1 and DC2 and is stopped on DC3. Cluster is up and running without errors or warnings. Test steps Move the SAPHana resource using the pcs resource move command. Starting the test Execute the cluster command: [root@clusternode1]# pcs resource move SAPHana_RH2_02-clone clusternode1 Expected result No change on DC3. SAP HANA System Replication stays on the old relationship. Ways to return to an initial state Re-register DC3 on the new primary and start SAP HANA.
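As a sketch of this recovery step, assuming that clusternode2 (DC2) ended up as the new primary after the move (adjust host and site names to your actual result; SID RH2 and instance 02 are taken from this example):
remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode2 --remoteInstance=02 --replicationMode=async --name=DC3 --remoteName=DC2 --operationMode=logreplay --online
remotehost3:rh2adm> HDB start
The same re-registration is also shown step by step in the detailed description below.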
Detailed description Check the initial state of your cluster as root on clusternode1 or clusternode2: [root@clusternode1]# pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled The output of this example shows you that HANA is promoted on clusternode1, which is the primary SAP HANA server, and that the name of the clone resource is SAPHana_RH2_02-clone , which is promotable. If you run test 3 before HANA, you might be promoted on clusternode2. Stop the database on remotehost3: remotehost3:rh2adm> HDB stop hdbdaemon will wait maximal 300 seconds for NewDB services finishing. Stopping instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function Stop 400 12.07.2023 11:33:14 Stop OK Waiting for stopped instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function WaitforStopped 600 2 12.07.2023 11:33:30 WaitforStopped OK hdbdaemon is stopped. 
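If you want to double-check that the instance on remotehost3 is really down before continuing, two optional checks (not part of the documented test steps) are:
remotehost3:rh2adm> HDB info
remotehost3:rh2adm> sapcontrol -nr 02 -function GetProcessList
HDB info should no longer list hdbnameserver or hdbindexserver processes, and GetProcessList should report the instance processes as stopped.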
Check the primary database on remotehost3: remotehost3:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i "primary masters" primary masters: clusterclusternode2 Check the current primary in the cluster on a cluster node: [root@clusterclusternode1]# pcs resource | grep Masters * Masters: [ clusternode2 ] Check the sr_state to see the SAP HANA System Replication relationships: clusternode2:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 2 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC3] remotehost3 clusternode1 -> [DC1] clusternode1 clusternode1 -> [DC2] clusternode2 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC3 (syncmem/logreplay) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC3: 2 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC3: syncmem Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC3: logreplay Operation mode of DC2: logreplay Mapping: DC1 -> DC3 Mapping: DC1 -> DC2 done. The SAP HANA System Replication relations still have one primary (DC1), which is replicated to DC2 and DC3. The replication relationship on remotehost3, which is down, can be displayed using: remothost3:rh2adm> hdbnsutil -sr_stateConfiguration System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 3 site name: DC3 active primary site: 1 primary masters: clusternode1 done. The database on remotehost3 which is offline checks the entries in the global.ini file. Starting the test: Initiate a failover in the cluster, moving the SAPHana-clone-resource example: [root@clusternode1]# pcs resource move SAPHana_RH2_02-clone clusternode2 Note If SAPHana is promoted on clusternode2, you have to move the clone resource to clusternode1. The example expects that SAPHana is promoted on clusternode1. There will be no output. Similar to the former test a location constraint will be created, which can be display with: [root@clusternode1]# pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started) Even if the cluster looks fine again, this constraint avoids another failover unless the constraint is removed. One way is to clear the resource. Clear the resource: [root@clusternode1]# pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started) [root@clusternode1]# pcs resource clear SAPHana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone Cleanup the resource: [root@clusternode1]# pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... got reply (done) Check the current status. There are three ways to display the replication status which needs to be in sync. Starting with the primary on remotehost3: remotehost3:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i primary active primary site: 1 primary masters: clusternode1 The output shows site 1 or clusternode1 which was the primary before starting the test to move the primary to clusternode2. check the system replication status on the new primary. 
First detect the new primary: [root@clusternode1]# pcs resource | grep Master * Masters: [ clusternode2 ] Here we have an inconsistency which requires re-registering remotehost3. You might think that if we run the test again, the primary will simply switch back to the original clusternode1. In this case, there is a third way to identify whether system replication is working. On the primary node, which in our case is clusternode2, run: If you don't see remotehost3 in this output, you have to re-register remotehost3. Before registering, please run the following on the primary node to watch the progress of the registration: clusternode2:rh2adm> watch python USDDIR_EXECUTABLE/python_support/systemReplicationStatus.py Now you can re-register remotehost3 using this command: remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode2 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC2 --operationMode=logreplay --online adding site ... collecting information ... updating local ini files ... done. Even if the database on remotehost3 is not started yet, you are able to see the third site in the system replication status output. The registration can be finished by starting the database on remotehost3: remotehost3:rh2adm> HDB start StartService Impromptu CCC initialization by 'rscpCInit'. See SAP note 1266393. OK OK Starting instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function StartWait 2700 2 04.09.2023 11:36:47 Start OK The monitor started above will immediately show the synchronization of remotehost3. To switch back, run the test again. One optional test is to switch the primary to the node that is configured in the global.ini on remotehost3 and then start the database. The database might come up, but it will never be shown in the output of the system replication status, unless it is re-registered. For more information please also check Check SAP HANA System Replication status . 5.5. Test 3: Failover of the primary node to the third site Subject of the test Fail over the primary to the third site. The secondary will be re-registered to the third site. Test preconditions SAP HANA on DC1, DC2, DC3 is running. Cluster is up and running without errors or warnings. System Replication is in place and in sync (check python systemReplicationStatus.py ). Test steps Put the cluster into maintenance-mode to be able to recover. Take over the HANA database on the third node using: hdbnsutil -sr_takeover Starting the test Execute the SAP HANA command on remotehost3: remotehost3:rh2adm> hdbnsutil -sr_takeover Monitoring the test On the third site run: remotehost3:rh2adm> watch hdbnsutil -sr_state Expected result Third node will become primary. Secondary node will change the primary master to remotehost3. Former primary node needs to be re-registered to the new primary. Ways to return to an initial state Run Test 4: Failback of the primary node to the first site . Detailed description Check if the databases are running using Check database and check the replication status: clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" The output is for example: mode: syncmem primary masters: clusternode1 In this case the primary database is clusternode1. If you run this command on clusternode1 you will get: mode: primary On this primary node you can also display the system replication status. It should look like: Now we have a proper environment and we can start monitoring the system replication status on all 3 nodes in separate windows.
The three monitors should be started before the test is started. Their output will change while the test is executed, so keep them running until the test is completed. On the old primary node clusternode1, run in a separate window during the test: clusternode1:rh2adm> watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?' The output on clusternode1 will be: On remotehost3 run the same command: remotehost3:rh2adm> watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?' The response will be: this system is either not running or not primary system replication site The output will change after the test initiates the failover. The output looks similar to the example of the primary node before the test was started. On the second node start: clusternode2:rh2adm> watch -n 10 'hdbnsutil -sr_state | grep masters' This will show the current master clusternode1 and will switch immediately after the failover is initiated. To ensure that everything is configured right, please also check the global.ini . Check global.ini on DC1, DC2 and DC3. On all three nodes the global.ini should contain: [persistent] log_mode=normal [system_replication] register_secondaries_on_takeover=true You can edit the global.ini with, for example: clusternode1:rh2adm> vim /usr/sap/USDSAPSYSTEMNAME/SYS/global/hdb/custom/config/global.ini [Optional] Put the cluster into maintenance-mode : [root@clusternode1]# pcs property set maintenance-mode=true During the tests you will find out that the failover works with and without setting the maintenance-mode , so you can run the first test without it to see that it works either way. During recovery, however, the maintenance-mode should be set. It is also an option if the primary is not accessible. Start the test: Failover to DC3. On remotehost3 please run: remotehost3:rh2adm> hdbnsutil -sr_takeover done. The test has started, and now please check the output of the previously started monitors. On clusternode1 the system replication status will lose its relationship to remotehost3 and clusternode2 (DC2): The cluster still doesn't notice this behavior. If you check the return code of the system replication status, return code 11 means error, which tells you something is wrong. If you have access, it is a good idea to enter maintenance-mode now. Remotehost3 becomes the new primary, and clusternode2 (DC2) gets automatically re-registered to the new primary remotehost3. Example output of the system replication state of remotehost3: The return code 15 also says everything is okay, but clusternode1 is missing. This must be re-registered manually. The former primary clusternode1 is not listed, so the replication relationship is lost. Set maintenance-mode .
If not already done, set the maintenance-mode of the cluster on one of the cluster nodes with the command: [root@clusternode1]# pcs property set maintenance-mode=true You can check if the maintenance-mode is active by running this command: [root@clusternode1]# pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 (unmanaged) The resources are displayed as unmanaged, which indicates that the cluster is in maintenance-mode=true . The virtual IP address is still started on clusternode1. If you want to use this IP on another node, please disable vip_RH2_02_MASTER before you set maintenance-mode=true . [root@clusternode1]# pcs resource disable vip_RH2_02_MASTER Re-register clusternode1 When you check the sr_state on clusternode1, you will see a relationship only to DC2: clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done. But when you check DC2, the primary database server is DC3, so the information from DC1 is not correct. clusternode2:rh2adm> hdbnsutil -sr_state If you check the system replication status on DC1, the return code is 12, which is unknown. So DC1 needs to be re-registered. You can use this command to register the former primary clusternode1 as a new secondary of remotehost3: clusternode1:rh2adm> hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC1 --remoteName=DC3 --operationMode=logreplay --online After the registration is done, you will see on remotehost3 all three sites replicated and the status (return code) will change to 15. If this fails, you have to manually remove the replication relationships on DC1 and DC3. Please follow the instructions described in Register Secondary . For example, list the existing relations with: hdbnsutil -sr_state To remove the existing relations, you can use, for example: clusternode1:rh2adm> hdbnsutil -sr_unregister --name=DC2 This is usually not necessary. We assume that test 4 will be performed after test 3. So the recovery step is to run test 4. 5.6. Test 4: Failback of the primary node to the first site Subject of the test Primary switches back to a cluster node. Failback and enable the cluster again. Re-register the third site as secondary. Test preconditions SAP HANA primary node is running on the third site. Cluster is partly running. Cluster is put into maintenance-mode . Former cluster primary is accessible. Test steps Check the expected primary of the cluster.
Fail over from the DC3 node to the DC1 node. Check if the former secondary has switched to the new primary. Re-register remotehost3 as new secondary. Set cluster maintenance-mode=false and the cluster continues to work. Monitoring the test On the new primary start: remotehost3:rh2adm> watch python USDDIR_EXECUTABLE/python_support/systemReplicationStatus.py [root@clusternode1]# watch pcs status --full On the secondary start: clusternode2:rh2adm> watch hdbnsutil -sr_state Starting the test Check the expected primary of the cluster: [root@clusternode1]# pcs resource VIP and promoted SAP HANA resources should run on the same node which is the potential new primary. On this potential primary run: clusternode1:rh2adm> hdbnsutil -sr_takeover Re-register the former primary as new secondary: remotehost3:rh2adm> hdbnsutil -sr_register \ --remoteHost=clusternode1 \ --remoteInstance=USD{TINSTANCE} \ --replicationMode=syncmem \ --name=DC3 \ --remoteName=DC1 \ --operationMode=logreplay \ --force_full_replica \ --online Cluster continues to work after setting the maintenance-mode=false . Expected result New primary is starting SAP HANA. The replication status will show all 3 sites replicated. Second cluster site gets automatically re-registered to the new primary. The Disaster Recovery (DR) site becomes an additional replica of the database. Ways to return to an initial state Run test 3. Detailed description Check if the cluster is put into maintenance-mode : [root@clusternode1]# pcs property config maintenance-mode Cluster Properties: maintenance-mode: true If the maintenance-mode is not true, you can set it with: [root@clusternode1]# pcs property set maintenance-mode=true Check system replication status and discover the primary database on all nodes. First of all discover the primary database using: clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" The output should be as follows. On clusternode1: clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" mode: syncmem primary masters: remotehost3 On clusternode2: clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" mode: syncmem primary masters: remotehost3 On remotehost3: remotehost3:rh2adm> hdbnsutil -sr_state | egrep -e "^mode:|primary masters" mode: primary On all three nodes the primary database is remotehost3. On this primary database, you have to ensure that the system replication status is active for all three nodes and the return code is 15: Check that all three sr_states are consistent. Please run on all three nodes hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode : clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode remotehost3:rh2adm> hdbnsutil -sr_state --sapcontrol=1 | grep site.*Mode The output should be the same on all nodes: siteReplicationMode/DC1=primary siteReplicationMode/DC3=async siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay Start the monitoring in separate windows. On clusternode1 start: clusternode1:rh2adm> watch "python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \USD?" On remotehost3 start: remotehost3:rh2adm> watch "python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \USD?"
On clusternode2 start: clusternode2:rh2adm> watch "hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode" Start the test To fail over to clusternode1, run on clusternode1: clusternode1:rh2adm> hdbnsutil -sr_takeover done. Check the output of the monitors. The monitor on clusternode1 will change to: The return code 15 is also important here. The monitor on clusternode2 will change to: Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2: Mon Sep 4 23:35:18 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC2=logreplay DC3 is gone and needs to be re-registered. On remotehost3 the systemReplicationStatus reports an error and the return code changes to 11. Check if the cluster nodes get re-registered. clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done. The Site Mappings section shows that clusternode2 (DC2) was re-registered. Check or enable the vip resource: [root@clusternode1]# pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Stopped (disabled, unmanaged) The vip resource vip_RH2_02_MASTER is stopped. To start it again run: [root@clusternode1]# pcs resource enable vip_RH2_02_MASTER Warning: 'vip_RH2_02_MASTER' is unmanaged The warning is correct, because the cluster will not start any resource unless maintenance-mode=false . Stop cluster maintenance-mode . Before we stop the maintenance-mode , we should start two monitors in separate windows to see the changes. On clusternode2 run: [root@clusternode2]# watch pcs status --full On clusternode1 run: clusternode1:rh2adm> watch "python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo USD?"
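Before removing the maintenance-mode , it can also help to verify once more that the replication to clusternode2 is active, so that the cluster does not immediately report SFAIL. A minimal check, assuming the paths of this example (SID RH2, instance 02):
clusternode1:rh2adm> python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py; echo Status USD?
A Status value of 15 means the replication is active; see the return code list in the monitoring section above.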
Now you can unset the maintenance-mode on clusternode1 by running: [root@clusternode1]# pcs property set maintenance-mode=false The monitor on clusternode2 should show you that everything is running now as expected: Every 2.0s: pcs status --full clusternode1: Tue Sep 5 00:01:17 2023 Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:01:17 2023 * Last change: Tue Sep 5 00:00:30 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693872030 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1 : Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled After manual interaction it is always a good advice to cleanup the cluster like described in Cluster Cleanup . Re-register remotehost3 to the new primary on clusternode1. Remotehost3 needs to be re-registered. To monitor the progress please start on clusternode1: clusternode1:rh2adm> watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?' On remotehost3 please start: remotehost3:rh2adm> watch 'hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode' Now you can re-register remotehost3 with this command: remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode1 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC1 --operationMode=logreplay --online The monitor on clusternode1 will change to: And the monitor of remotehost3 will change to: Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode remotehost3: Tue Sep 5 02:15:28 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC3=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay Now we have again 3 entries and remotehost3 (DC3) is again a secondary site replicated from clusternode1 (DC1). 
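Depending on the database size, the full synchronization of DC3 can take some time. One hedged way to wait for it is to keep watching the return code of the system replication status on the primary until it reports 15 (active) again, using the paths of this example (SID RH2, instance 02):
clusternode1:rh2adm> watch -n 10 'python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py > /dev/null; echo Status USD?'
Only continue with the checks below once Status 15 is shown.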
Check if all nodes are part of the system replication status on clusternode1. Please run on all three nodes hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode : clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*ModesiteReplicationMode clusternode2:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode remotehost3:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode On all nodes we should get the same output: siteReplicationMode/DC1=primary siteReplicationMode/DC3=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay Check pcs status --full and SOK . Run: The output should be either PRIM or SOK : * hana_rh2_sync_state : PRIM * hana_rh2_sync_state : SOK Finally the cluster status should look like this including the sync_state PRIM and SOK : [root@clusternode1]# pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:18:52 2023 * Last change: Tue Sep 5 00:16:54 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693873014 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1 : Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled Refer to Check cluster status and Check database , to verify that all works fine again.
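As a final, optional plausibility check after any of these tests, the individual checks from this chapter can be combined into a short checklist run on the current primary cluster node (the grep patterns are only a suggestion):
[root@clusternode1]# pcs status --full | grep -E 'sync_state|clone_state|Migration'
[root@clusternode1]# pcs constraint location
[root@clusternode1]# pcs stonith history
clusternode1:rh2adm> python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py; echo Status USD?
If the sync_state values are PRIM and SOK, no location constraints or stonith actions are left over, and the replication status returns 15, the environment is back in a clean initial state for the next test.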
[ "pcs resource clear SAPHana_RH2_02-clone", "pcs status | grep -E \"Promoted|Master\" hdbnsutil -sr_stateConfiguration", "df -h", "clusternode1:rh2adm> watch -n 5 'hdbnsutil -sr_stateConfiguration | egrep -e \"primary masters|^mode\"'", "mode: syncmem primary masters: clusternode2", "mode: primary", "clusternode1:rh2adm> hdbnsutil -sr_stateConfiguration", "clusternode1:rh2adm> watch -n 5 'python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "alias tmsl='tail -1000f /var/log/messages | egrep -s \"Setting master-rsc_SAPHana_USDSAPSYSTEMNAME_HDBUSD{TINSTANCE}|sr_register|WAITING4LPA|PROMOTED|DEMOTED|UNDEFINED|master_walk|SWAIT|WaitforStopped|FAILED|LPT\"'", "watch pcs status --full", "remotehost3:rh2adm> watch hdbnsutil -sr_state tail -1000f /var/log/messages |egrep -e 'SOK|SWAIT|SFAIL'", "pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "watch pcs status --full", "pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * Started: [ clusternode1 clusternode2 ] * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * Promoted: [ clusternode2 ] * Unpromoted: [ clusternode1 ]", "remotehost3:rh2adm> watch 'hdbnsutil -sr_state | grep \"primary masters\"", "Every 2.0s: hdbnsutil -sr_state | grep \"primary masters\" remotehost3: Mon Sep 4 08:47:21 2023 primary masters: clusternode1", "pcs resource move SAPhana_RH2_02-clone clusternode2", "Every 2.0s: hdbnsutil -sr_state | grep \"primary masters\" remotehost3: Mon Sep 4 08:50:31 2023 primary masters: clusternode2", "pcs constraint 
location", "pcs resource clear SAPhana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone", "pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... got reply (done)", "pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Mon Sep 4 06:34:46 2023 * Last change: Mon Sep 4 06:33:04 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693809184 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1: Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "remotehost3:rh2adm> HDB stop hdbdaemon will wait maximal 300 seconds for NewDB services finishing. 
Stopping instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function Stop 400 12.07.2023 11:33:14 Stop OK Waiting for stopped instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function WaitforStopped 600 2 12.07.2023 11:33:30 WaitforStopped OK hdbdaemon is stopped.", "remotehost3:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i \"primary masters\" primary masters: clusterclusternode2", "pcs resource | grep Masters * Masters: [ clusternode2 ]", "clusternode2:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 2 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC3] remotehost3 clusternode1 -> [DC1] clusternode1 clusternode1 -> [DC2] clusternode2 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC3 (syncmem/logreplay) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC3: 2 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC3: syncmem Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC3: logreplay Operation mode of DC2: logreplay Mapping: DC1 -> DC3 Mapping: DC1 -> DC2 done.", "remothost3:rh2adm> hdbnsutil -sr_stateConfiguration System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 3 site name: DC3 active primary site: 1 primary masters: clusternode1 done.", "pcs resource move SAPHana_RH2_02-clone clusternode2", "pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started)", "pcs constraint location Location Constraints: Resource: SAPHana_RH2_02-clone Enabled on: Node: clusternode1 (score:INFINITY) (role:Started) pcs resource clear SAPHana_RH2_02-clone Removing constraint: cli-prefer-SAPHana_RH2_02-clone", "pcs resource cleanup SAPHana_RH2_02-clone Cleaned up SAPHana_RH2_02:0 on clusternode2 Cleaned up SAPHana_RH2_02:1 on clusternode1 Waiting for 1 reply from the controller ... 
got reply (done)", "remotehost3:rh2adm> hdbnsutil -sr_stateConfiguration| grep -i primary active primary site: 1 primary masters: clusternode1", "pcs resource | grep Master * Masters: [ clusternode2 ]", "clusternode2:rh2adm> cdpy clusternode2:rh2adm> python USDDIR_EXECUTABLE/python_support/systemReplicationStatus.py |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode2 |30201 |nameserver | 1 | 2 |DC2 |clusternode1 | 30201 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode2 |30207 |xsengine | 2 | 2 |DC2 |clusternode1 | 30207 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode2 |30203 |indexserver | 3 | 2 |DC2 |clusternode1 | 30203 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"1\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 2 site name: DC2", "clusternode2:rh2adm> watch python USDDIR_EXECUTABLE/python_support/systemReplicationStatus.py", "remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode2 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC2 --operation Mode=logreplay --online adding site collecting information updating local ini files done.", "remotehost3:rh2adm> HDB start StartService Impromptu CCC initialization by 'rscpCInit'. See SAP note 1266393. 
OK OK Starting instance using: /usr/sap/RH2/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 02 -function StartWait 2700 2 04.09.2023 11:36:47 Start OK", "clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\"", "mode: syncmem primary masters: clusternode1", "mode: primary", "clusternode1:rh2adm> cdpy clusternode1:rh2adm> python systemReplicationStatus.py |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES |SYNCMEM |ACTIVE | | True | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"3\": ACTIVE status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1", "clusternode1:rh2adm> watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?`", "Every 5.0s: python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicati... 
clusternode1: Tue XXX XX HH:MM:SS 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary | Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status | Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- | ----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES | ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES | ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES | ASYNC |ACTIVE | | True | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES | SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES | SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES | SYNCMEM |ACTIVE | | True | status system replication site \"3\": ACTIVE status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 Status 15", "remotehost3:rh2adm> watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "this system is either not running or not primary system replication site", "clusternode2:rh2adm> watch -n 10 'hdbnsutil -sr_state | grep masters'", "[persistent] log_mode=normal [system_replication] register_secondaries_on_takeover=true", "clusternode1:rh2adm>vim /usr/sap/USDSAPSYSTEMNAME/SYS/global/hdb/custom/config/global.ini", "pcs property set maintenance-mode=true", "remotehost3:rh2adm> hdbnsutil -sr_takeover done.", "Every 5.0s: python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py ; echo Status USD? clusternode1: Mon Sep 4 11:52:16 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replic ation |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |------ ---------------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ERROR |Commun ication channel closed | False | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ERROR |Commun ication channel closed | False | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ERROR |Commun ication channel closed | False | status system replication site \"2\": ERROR overall system replication status: ERROR Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 Status 11", "Every 5.0s: python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py ; echo Status USD? 
remotehost3: Mon Sep 4 13:55:29 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replic ation |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |------ -------- |------------ | |SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 3 site name: DC3 Status 15", "pcs property set maintenance-mode=true", "pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 (unmanaged)", "pcs resource disable vip_RH2_02_MASTER", "clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done.", "clusternode2:rh2adm> hdbnsutil -sr_state", "clusternode1:rh2adm> hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC1 --remoteName=DC3 --operationMode=logreplay --online", "hdbnsutil -sr_state", "clusternode1:rh2adm> hdbnsutil -sr_unregister --name=DC2", "pcs property config maintenance-mode Cluster Properties: maintenance-mode: true", "pcs property set maintenance-mode=true", "clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\"", "clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\" mode: syncmem primary masters: remotehost3", "clusternode2:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\" mode: syncmem primary masters: remotehost3", "remotehost3:rh2adm> hdbnsutil -sr_state | egrep -e \"^mode:|primary masters\" mode: primary", "remotehost3:rh2adm> python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication 
|Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |SYSTEMDB |remotehost3 |30201 |nameserver | 1 | 3 |DC3 |clusternode1 | 30201 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30207 |xsengine | 2 | 3 |DC3 |clusternode1 | 30207 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | |RH2 |remotehost3 |30203 |indexserver | 3 | 3 |DC3 |clusternode1 | 30203 | 1 |DC1 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"2\": ACTIVE status system replication site \"1\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 3 site name: DC3 echo USD? 15", "clusternode1:rh2adm>hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode", "clusternode2:rh2adm>hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode", "remotehost3:rh2adm>hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode", "siteReplicationMode/DC1=primary siteReplicationMode/DC3=async siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay", "clusternode1:rh2adm> watch \"python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \\USD?\"", "remotehost3:rh2adm> watch \"python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo \\USD?\"", "clusternode2:rh2adm> watch \"hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode\"", "clusternode1:rh2adm> hdbnsutil -sr_takeover done.", "Every 2.0s: python systemReplicationStatus.py; echo USD? 
clusternode1: Mon Sep 4 23:34:30 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 15", "Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode clusternode2: Mon Sep 4 23:35:18 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC2=logreplay", "clusternode1:rh2adm> hdbnsutil -sr_state System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ online: true mode: primary operation mode: primary site id: 1 site name: DC1 is source system: true is secondary/consumer system: false has secondaries/consumers attached: true is a takeover active: false is primary suspended: false Host Mappings: ~~~~~~~~~~~~~~ clusternode1 -> [DC2] clusternode2 clusternode1 -> [DC1] clusternode1 Site Mappings: ~~~~~~~~~~~~~~ DC1 (primary/primary) |---DC2 (syncmem/logreplay) Tier of DC1: 1 Tier of DC2: 2 Replication mode of DC1: primary Replication mode of DC2: syncmem Operation mode of DC1: primary Operation mode of DC2: logreplay Mapping: DC1 -> DC2 done.", "pcs resource * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02] (unmanaged): * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 (unmanaged) * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 (unmanaged) * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable, unmanaged): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode2 (unmanaged) * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode1 (unmanaged) * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Stopped (disabled, unmanaged)", "pcs resource enable vip_RH2_02_MASTER Warning: 'vip_RH2_02_MASTER' is unmanaged", "watch pcs status --full", "clusternode1:rh2adm> watch \"python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py; echo USD?\"", "pcs property set maintenance-mode=false", "Every 2.0s: pcs status --full clusternode1: Tue Sep 5 00:01:17 2023 Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:01:17 2023 * Last change: Tue Sep 5 00:00:30 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * 
SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693872030 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1 : Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "clusternode1:rh2adm> watch -n 5 'python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD?'", "remotehost3:rh2adm> watch 'hdbnsutil -sr_state --sapcontrol=1 |grep siteReplicationMode'", "remotehost3:rh2adm> hdbnsutil -sr_register --remoteHost=clusternode1 --remoteInstance=USD{TINSTANCE} --replicationMode=async --name=DC3 --remoteName=DC1 --operationMode=logreplay --online", "Every 5.0s: python /usr/sap/USDSAPSYSTEMNAME/HDBUSD{TINSTANCE}/exe/python_support/systemReplicationStatus.py ; echo Status USD? 
clusternode1: Tue Sep 5 00:14:40 2023 |Database |Host |Port |Service Name |Volume ID |Site ID |Site Name |Secondary |Secondary |Secondary |Secondary |Secondary |Replication |Replication |Replication |Secondary | | | | | | | | |Host |Port |Site ID |Site Name |Active Status |Mode |Status |Status Details |Fully Synced | |-------- |------ |----- |------------ |--------- |------- |--------- |--------- |--------- |--------- |--------- |------------- |----------- |----------- |-------------- |------------ | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |remotehost3 | 30201 | 3 |DC3 |YES |ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |remotehost3 | 30207 | 3 |DC3 |YES |ASYNC |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |remotehost3 | 30203 | 3 |DC3 |YES |ASYNC |ACTIVE | | True | |SYSTEMDB |clusternode1 |30201 |nameserver | 1 | 1 |DC1 |clusternode2 | 30201 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30207 |xsengine | 2 | 1 |DC1 |clusternode2 | 30207 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | |RH2 |clusternode1 |30203 |indexserver | 3 | 1 |DC1 |clusternode2 | 30203 | 2 |DC2 |YES |SYNCMEM |ACTIVE | | True | status system replication site \"3\": ACTIVE status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 Status 15", "Every 2.0s: hdbnsutil -sr_state --sapcontrol=1 |grep site.*Mode remotehost3: Tue Sep 5 02:15:28 2023 siteReplicationMode/DC1=primary siteReplicationMode/DC3=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay", "clusternode1:rh2adm> hdbnsutil -sr_state --sapcontrol=1 |grep site.*ModesiteReplicationMode", "clusternode2:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode", "remotehost3:rh2adm> hsbnsutil -sr_state --sapcontrol=1 | grep site.*Mode", "siteReplicationMode/DC1=primary siteReplicationMode/DC3=syncmem siteReplicationMode/DC2=syncmem siteOperationMode/DC1=primary siteOperationMode/DC3=logreplay siteOperationMode/DC2=logreplay", "pcs status --full| grep sync_state", "* hana_rh2_sync_state : PRIM * hana_rh2_sync_state : SOK", "pcs status --full Cluster name: cluster1 Cluster Summary: * Stack: corosync * Current DC: clusternode1 (1) (version 2.1.2-4.el8_6.6-ada5c3b36e2) - partition with quorum * Last updated: Tue Sep 5 00:18:52 2023 * Last change: Tue Sep 5 00:16:54 2023 by root via crm_attribute on clusternode1 * 2 nodes configured * 6 resource instances configured Node List: * Online: [ clusternode1 (1) clusternode2 (2) ] Full List of Resources: * auto_rhevm_fence1 (stonith:fence_rhevm): Started clusternode1 * Clone Set: SAPHanaTopology_RH2_02-clone [SAPHanaTopology_RH2_02]: * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode2 * SAPHanaTopology_RH2_02 (ocf::heartbeat:SAPHanaTopology): Started clusternode1 * Clone Set: SAPHana_RH2_02-clone [SAPHana_RH2_02] (promotable): * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Slave clusternode2 * SAPHana_RH2_02 (ocf::heartbeat:SAPHana): Master clusternode1 * vip_RH2_02_MASTER (ocf::heartbeat:IPaddr2): Started clusternode1 Node Attributes: * Node: clusternode1 (1): * hana_rh2_clone_state : PROMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode2 * hana_rh2_roles : 4:P:master1:master:worker:master * hana_rh2_site : DC1 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * 
hana_rh2_sync_state : PRIM * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode1 * lpa_rh2_lpt : 1693873014 * master-SAPHana_RH2_02 : 150 * Node: clusternode2 (2): * hana_rh2_clone_state : DEMOTED * hana_rh2_op_mode : logreplay * hana_rh2_remoteHost : clusternode1 * hana_rh2_roles : 4:S:master1:master:worker:master * hana_rh2_site : DC2 * hana_rh2_sra : - * hana_rh2_srah : - * hana_rh2_srmode : syncmem * hana_rh2_sync_state : SOK * hana_rh2_version : 2.00.062.00 * hana_rh2_vhost : clusternode2 * lpa_rh2_lpt : 30 * master-SAPHana_RH2_02 : 100 Migration Summary: Tickets: PCSD Status: clusternode1 : Online clusternode2: Online Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_test_cases_v8-configuring-hana-scale-up-multitarget-system-replication-disaster-recovery
Part II. Notable Bug Fixes
Part II. Notable Bug Fixes This part describes bugs fixed in Red Hat Enterprise Linux 7.3 that have a significant impact on users.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/bug-fixes
Chapter 6. Installing a cluster on Alibaba Cloud into an existing VPC
Chapter 6. Installing a cluster on Alibaba Cloud into an existing VPC In OpenShift Container Platform version 4.12, you can install a cluster into an existing Alibaba Virtual Private Cloud (VPC) on Alibaba Cloud Services. The installation program provisions the required infrastructure, which can then be customized. To customize the VPC installation, modify the parameters in the 'install-config.yaml' file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 6.2. Using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in the Alibaba Cloud Platform. By deploying OpenShift Container Platform into an existing Alibaba VPC, you can avoid limit constraints in new accounts and more easily adhere to your organization's operational constraints. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking using vSwitches. 6.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The vSwitches must be within the machine network. The installation program does not create the following components: VPC vSwitches Route table NAT gateway Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.2.2. VPC validation To ensure that the vSwitches you provide are suitable, the installation program confirms the following data: All the vSwitches that you specify must exist. You have provided one or more vSwitches for control plane machines and compute machines. The vSwitches' CIDRs belong to the machine CIDR that you specified. 6.2.3. Division of permissions Some individuals can create different resources in your cloud than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs or vSwitches. 6.2.4. 
Isolation between clusters If you deploy OpenShift Container Platform into an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.5.1. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.5.2. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.5.2.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. 
The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.5.2.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.5.2.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. 
Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.5.2.4. Additional Alibaba Cloud configuration parameters Additional Alibaba Cloud configuration parameters are described in the following table. The alibabacloud parameters are the configuration used when installing on Alibaba Cloud. The defaultMachinePlatform parameters are the default configuration used when installing on Alibaba Cloud for machine pools that do not define their own platform configuration. These parameters apply to both compute machines and control plane machines where specified. Note If defined, the parameters compute.platform.alibabacloud and controlPlane.platform.alibabacloud will overwrite platform.alibabacloud.defaultMachinePlatform settings for compute machines and control plane machines respectively. Table 6.4. Optional Alibaba Cloud parameters Parameter Description Values compute.platform.alibabacloud.imageID The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. String. compute.platform.alibabacloud.instanceType InstanceType defines the ECS instance type. Example: ecs.g6.large String. compute.platform.alibabacloud.systemDiskCategory Defines the category of the system disk. Examples: cloud_efficiency , cloud_essd String. compute.platform.alibabacloud.systemDisksize Defines the size of the system disk in gibibytes (GiB). Integer. compute.platform.alibabacloud.zones The list of availability zones that can be used. Examples: cn-hangzhou-h , cn-hangzhou-j String list. controlPlane.platform.alibabacloud.imageID The imageID used to create the ECS instance. ImageID must belong to the same region as the cluster. String. controlPlane.platform.alibabacloud.instanceType InstanceType defines the ECS instance type. Example: ecs.g6.xlarge String. controlPlane.platform.alibabacloud.systemDiskCategory Defines the category of the system disk. Examples: cloud_efficiency , cloud_essd String. controlPlane.platform.alibabacloud.systemDisksize Defines the size of the system disk in gibibytes (GiB). Integer. controlPlane.platform.alibabacloud.zones The list of availability zones that can be used. Examples: cn-hangzhou-h , cn-hangzhou-j String list. platform.alibabacloud.region Required. The Alibaba Cloud region where the cluster will be created. String. platform.alibabacloud.resourceGroupID The ID of an already existing resource group where the cluster will be installed. If empty, the installation program will create a new resource group for the cluster. String. platform.alibabacloud.tags Additional keys and values to apply to all Alibaba Cloud resources created for the cluster. Object. platform.alibabacloud.vpcID The ID of an already existing VPC where the cluster should be installed. If empty, the installation program will create a new VPC for the cluster. String. platform.alibabacloud.vswitchIDs The ID list of already existing VSwitches where cluster resources will be created. The existing VSwitches can only be used when also using existing VPC. If empty, the installation program will create new VSwitches for the cluster. String list. platform.alibabacloud.defaultMachinePlatform.imageID For both compute machines and control plane machines, the image ID that should be used to create ECS instance. If set, the image ID should belong to the same region as the cluster. String. 
platform.alibabacloud.defaultMachinePlatform.instanceType For both compute machines and control plane machines, the ECS instance type used to create the ECS instance. Example: ecs.g6.xlarge String. platform.alibabacloud.defaultMachinePlatform.systemDiskCategory For both compute machines and control plane machines, the category of the system disk. Examples: cloud_efficiency , cloud_essd . String, for example "", cloud_efficiency , cloud_essd . platform.alibabacloud.defaultMachinePlatform.systemDiskSize For both compute machines and control plane machines, the size of the system disk in gibibytes (GiB). The minimum is 120 . Integer. platform.alibabacloud.defaultMachinePlatform.zones For both compute machines and control plane machines, the list of availability zones that can be used. Examples: cn-hangzhou-h , cn-hangzhou-j String list. platform.alibabacloud.privateZoneID The ID of an existing private zone into which to add DNS records for the cluster's internal API. An existing private zone can only be used when also using existing VPC. The private zone must be associated with the VPC containing the subnets. Leave the private zone unset to have the installation program create the private zone on your behalf. String. 6.5.3. Sample customized install-config.yaml file for Alibaba Cloud You can customize the installation configuration file ( install-config.yaml ) to specify more details about your cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ... }' 8 sshKey: | ssh-rsa AAAA... 9 1 Required. The installation program prompts you for a cluster name. 2 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 3 Optional. Specify parameters for machine pools that do not define their own platform configuration. 4 Required. The installation program prompts you for the region to deploy the cluster to. 5 Optional. Specify an existing resource group where the cluster should be installed. 8 Required. The installation program prompts you for the pull secret. 9 Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster. 6 7 Optional. These are example vswitchID values. 6.5.4. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. 
Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 6.5.5. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.5.6. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. 
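For reference, the ~/.alibabacloud/credentials file uses an INI-style format. The following is a minimal sketch of its typical contents; the default profile name and the placeholder values are examples only, not values from your environment:
[default]
type = access_key
access_key_id = <access_key_id>
access_key_secret = <access_key_secret>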
Procedure Set the USDRELEASE_IMAGE variable by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --cloud=alibabacloud \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1 USDRELEASE_IMAGE 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on Alibaba Cloud 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4 1 The Machine API Operator CR is required. 2 The Image Registry Operator CR is required. 3 The Ingress Operator CR is required. 4 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir> where: <name> is the name used to tag any cloud resources that are created for tracking. <alibaba_region> is the Alibaba Cloud region in which cloud resources will be created. <path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files for the component CredentialsRequest objects. <path_to_ccoctl_output_dir> is the directory where the generated component credentials secrets will be placed. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the generated manifests secret becomes stale and you must reapply the newly generated secrets. 
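Optionally, before you copy the generated manifests, you can sanity-check that ccoctl produced one secret manifest for each CredentialsRequest object it processed. A minimal shell sketch, assuming the same directory placeholders used above; the two counts should match: USD ls <path_to_directory_with_list_of_credentials_requests>/credrequests/*.yaml | wc -l USD ls <path_to_ccoctl_output_dir>/manifests/*credentials.yaml | wc -l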
Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output: openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 6.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.9. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 6.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console 6.11. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --credentials-requests --cloud=alibabacloud --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 USDRELEASE_IMAGE", "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4", "ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --output-dir=<path_to_ccoctl_output_dir>", "2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy 
user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml", "cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_alibaba/installing-alibaba-vpc
Chapter 6. Plug-in implemented server functionality reference
Chapter 6. Plug-in implemented server functionality reference This chapter contains reference information on plug-ins. The configuration for each part of Directory Server plug-in functionality has its own separate entry and set of attributes under the subtree cn=plugins,cn=config . Some of these attributes are common to all plug-ins while others may be particular to a specific plug-in. You can check which attributes a given plug-in uses by performing an ldapsearch on the cn=config subtree. All plug-ins are instances of the nsSlapdPlugin object class inherited from the extensibleObject object class. Server takes into account plug-in configuration attributes when both object classes (in addition to the top object class) are present in the entry, as shown in the following example: 6.1. List of attributes common to all plug-ins This list provides a brief attribute description, the entry DN, valid range, default value, syntax, and an example for each attribute. Each Directory Server plug-in belongs to the nsslapdPlugin object class. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.41 Table 6.1. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. cn Gives the common name of the entry. nsslapd-pluginPath Identifies the plugin library name (without the library suffix). nsslapd-pluginInitfunc Identifies an initialization function of the plugin. nsslapd-pluginType Identifies the type of plugin. nsslapd-pluginId Identifies the plugin ID. nsslapd-pluginVersion Identifies the version of plugin. nsslapd-pluginVendor Identifies the vendor of plugin. nsslapd-pluginDescription Identifies the description of the plugin. nsslapd-pluginEnabled Identifies whether or not the plugin is enabled. nsslapd-pluginPrecedence Sets the priority for the plug-in in the execution order. 6.1.1. nsslapd-logAccess This attribute enables you to log search operations run by the plug-in to the file set in the nsslapd-accesslog parameter in cn=config . Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-logAccess: Off 6.1.2. nsslapd-logAudit This attribute enables you to log and audit modifications to the database originated from the plug-in. Successful modification events are logged in the audit log, if the nsslapd-auditlog-logging-enabled parameter is enabled in cn=config . To log failed modification database operations by a plug-in, enable the nsslapd-auditfaillog-logging-enabled attribute in cn=config . Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-logAudit: Off 6.1.3. nsslapd-pluginDescription This attribute provides a description of the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-pluginDescription: acl access check plug-in 6.1.4. nsslapd-pluginEnabled This attribute specifies whether the plug-in is enabled. This attribute can be changed over protocol but will only take effect when the server is restarted. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-pluginEnabled: on 6.1.5. nsslapd-pluginId This attribute specifies the plug-in ID. 
Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in ID Default Value None Syntax DirectoryString Example nsslapd-pluginId: chaining database 6.1.6. nsslapd-pluginInitfunc This attribute specifies the plug-in function to be initiated. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in function Default Value None Syntax DirectoryString Example nsslapd-pluginInitfunc: NS7bitAttr_Init 6.1.7. nsslapd-pluginPath This attribute specifies the full path to the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid path Default Value None Syntax DirectoryString Example nsslapd-pluginPath: uid-plugin 6.1.8. nsslapd-pluginPrecedence This attribute sets the precedence or priority for the execution order of a plug-in. Precedence defines the execution order of plug-ins, which allows more complex environments or interactions since it can enable a plug-in to wait for a completed operation before being executed. This is more important for pre-operation and post-operation plug-ins. Plug-ins with a value of 1 have the highest priority and are run first; plug-ins with a value of 99 have the lowest priority. The default is 50. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values 1 to 99 Default Value 50 Syntax Integer Example nsslapd-pluginPrecedence: 3 6.1.9. nsslapd-pluginType This attribute specifies the plug-in type. See Section 6.2.4, "nsslapd-plugin-depends-on-type" for further information. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in type Default Value None Syntax DirectoryString Example nsslapd-pluginType: preoperation 6.1.10. nsslapd-pluginVendor This attribute specifies the vendor of the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any approved plug-in vendor Default Value Red Hat, Inc. Syntax DirectoryString Example nsslapd-pluginVendor: Red Hat, Inc. 6.1.11. nsslapd-pluginVersion This attribute specifies the plug-in version. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in version Default Value Product version number Syntax DirectoryString Example nsslapd-pluginVersion: {VER} 6.2. Optional attributes of certain plug-ins 6.2.1. nsslapd-dynamic-plugins You can enable some Directory Server plug-ins dynamically without the instance restart. Enable the nsslapd-dynamic-plugins attribute in Directory Server to allow the dynamic plug-ins. By default, dynamic plug-ins are disabled. Warning Red Hat Directory Server does not support dynamic plug-ins. Use it only for testing and debugging purposes. You cannot configure some plug-ins as dynamic. To enable such plug-ins, restart the instance. Plug-in Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-dynamic-plugins: on 6.2.2. nsslapd-pluginConfigArea Some plug-in entries are container entries, and multiple instances of the plug-in are created beneath this container in cn=plugins,cn=config . However, the cn=plugins,cn=config is not replicated, which means that the plug-in configurations beneath those container entries must be configured manually, in some way, on every Directory Server instance. 
The nsslapd-pluginConfigArea attribute points to another container entry, in the main database area, which contains the plug-in instance entries. This container entry can be in a replicated database, which allows the plug-in configuration to be replicated. Plug-in Parameter Description Entry DN cn= plug-in name ,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DN Example nsslapd-pluginConfigArea: cn=managed entries container,ou=containers,dc=example,dc=com 6.2.3. nsslapd-plugin-depends-on-named Multi-valued attribute used to ensure that plug-ins are called by the server in the correct order. Takes a value which corresponds to the cn value of a plug-in. The plug-in with a cn value matching one of the following values will be started by the server prior to this plug-in. If the plug-in does not exist, the server fails to start. The following postoperation Referential Integrity Plug-in example shows that the Views plug-in is started before Roles. If Views is missing, the server is not going to start. Plug-in Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values Class of Service Default Value Syntax DirectoryString Example * nsslapd-plugin-depends-on-named: Views * nsslapd-pluginId: roles 6.2.4. nsslapd-plugin-depends-on-type Multi-valued attribute used to ensure that plug-ins are called by the server in the correct order. Takes a value which corresponds to the type number of a plug-in, contained in the attribute nsslapd-pluginType . See Section 6.1.9, "nsslapd-pluginType" for further information. All plug-ins with a type value which matches one of the values in the following valid range will be started by the server prior to this plug-in. The following postoperation Referential Integrity Plug-in example shows that the database plug-in will be started prior to the postoperation Referential Integrity Plug-in. Plug-in Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values database Default Value Syntax DirectoryString Example nsslapd-plugin-depends-on-type: database 6.2.5. nsslapd-pluginLoadGlobal This attribute specifies whether the symbols in dependent libraries are made visible locally ( false ) or to the executable and to all shared objects ( true ). Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsslapd-pluginLoadGlobal: false 6.2.6. nsslapd-pluginLoadNow This attribute specifies whether to load all of the symbols used by a plug-in immediately ( true ), as well as all symbols references by those symbols, or to load the symbol the first time it is used ( false ). Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsslapd-pluginLoadNow: false 6.3. Server plug-in functionality reference This section provides an overview of the plug-ins provided with Directory Server, along with their configurable options, configurable arguments, default setting, dependencies, general performance-related information, and further reading. 6.3.1. 
7-bit Check plug-in Plug-in Parameter Description Plug-in ID NS7bitAtt DN of Configuration Entry cn=7-bit check,cn=plugins,cn=config Description Checks certain attributes are 7-bit clean Type preoperation Configurable Options on | off Default Setting on Configurable Arguments List of attributes ( uid mail userpassword ) followed by "," and then suffixes on which the check is to occur. Dependencies Database Performance-Related Information None Further Information If Directory Server uses non-ASCII characters, such as Japanese, turn this plug-in off. 6.3.2. Account Policy plug-in Account policies can be set that automatically lock an account after a certain amount of time has elapsed. This can be used to create temporary accounts that are only valid for a preset amount of time or to lock users which have been inactive for a certain amount of time. The Account Policy Plug-in itself accepts only one argument, which points to a plug-in configuration entry. dn: cn=Account Policy Plugin,cn=plugins,cn=config ... nsslapd-pluginarg0: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config The account policy configuration entry defines, for the entire server, what attributes to use for account policies. Most of the configuration defines attributes to use to evaluate account policies and expiration times, but the configuration also defines what object class to use to identify subtree-level account policy definitions. dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config objectClass: top objectClass: extensibleObject cn: config ... attributes for evaluating accounts ... alwaysRecordLogin: yes stateattrname: lastLoginTime altstateattrname: createTimestamp ... attributes for account policy entries ... specattrname: acctPolicySubentry limitattrname: accountInactivityLimit Once the plug-in is configured globally, account policy entries can be created within the user subtrees, and then these policies can be applied to users and to roles through classes of service. Example 6.1. Account Policy Definition dn: cn=AccountPolicy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy # 86400 seconds per day * 30 days = 2592000 seconds accountInactivityLimit: 2592000 cn: AccountPolicy Any entry, both individual users and roles or CoS templates, can be an account policy subentry. Every account policy subentry has its creation and login times tracked against any expiration policy. Example 6.2. User Account with Account Policy dn: uid=scarter,ou=people,dc=example,dc=com ... lastLoginTime: 20060527001051Z acctPolicySubentry: cn=AccountPolicy,dc=example,dc=com Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Account Policy Plugin,cn=plugins,cn=config Description Defines a policy to lock user accounts after a certain expiration period or inactivity period. Type object Configurable Options on | off Default Setting off Configurable Arguments A pointer to a configuration entry which contains the global account policy settings. Dependencies Database Performance-Related Information None Further Information This plug-in configuration points to a configuration entry which is used for server-wide settings on account inactivity and expiration data. Individual (subtree-level or user-level) account policies can be defined as directory entries, as instances of the acctPolicySubentry object class. These configuration entries can then be applied to users or roles through classes of service. 6.3.2.1.
altstateattrname Account expiration policies are based on some timed criteria for the account. For example, for an inactivity policy, the primary criteria may be the last login time, lastLoginTime . However, there may be instances where that attribute does not exist on an entry, such as a user who never logged into his account. The altstateattrname attribute provides a backup attribute for the server to reference to evaluate the expiration time. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example altstateattrname: createTimeStamp 6.3.2.2. alwaysRecordLogin By default, only entries which have an account policy directly applied to them - meaning, entries with the acctPolicySubentry attribute - have their login times tracked. If account policies are applied through classes of service or roles, then the acctPolicySubentry attribute is on the template or container entry, not the user entries themselves. The alwaysRecordLogin attribute sets that every entry records its last login time. This allows CoS and roles to be used to apply account policies. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range yes | no Default Value no Syntax DirectoryString Example alwaysRecordLogin: no 6.3.2.3. alwaysRecordLoginAttr The Account Policy plug-in uses the attribute name set in the alwaysRecordLoginAttr parameter to store the time of the last successful login in this attribute in the user's directory entry. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any valid attribute name Default Value stateAttrName Syntax DirectoryString Example alwaysRecordLoginAttr: lastLoginTime 6.3.2.4. lastLoginHistSize To maintain a history of successful logins, you can use the lastLoginHistSize attribute that determines the number of logins to store and stores the last five successful logins by default. For the lastLoginHistSize attribute to store the last logins, you must enable the alwaysRecordLogin attribute. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range 0 (Disable) to the maximum 32 bit integer value (2147483647) Default Value 5 Syntax Integer Example lastloginhistorysize: 10 6.3.2.5. limitattrname The account policy entry in the user directory defines the time limit for the account lockout policy. This time limit can be set in any time-based attribute, and a policy entry could have multiple time-based attributes in it. The attribute within the policy to use for the account inactivation limit is defined in the limitattrname attribute in the Account Policy Plug-in, and it is applied globally to all account policies. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example limitattrname: accountInactivityLimit 6.3.2.6. specattrname There are really two configuration entries for an account policy: the global settings in the plug-in configuration entry and then user- or subtree-level settings in an entry within the user directory. An account policy can be set directly on a user entry or it can be set as part of a CoS or role configuration. The way that the plug-in identifies which entries are account policy configuration entries is by identifying a specific attribute on the entry which flags it as an account policy.
This attribute in the plug-in configuration is specattrname ; it will usually be set to acctPolicySubentry . Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example specattrname: acctPolicySubentry 6.3.2.7. stateattrname Account expiration policies are based on some timed criteria for the account. For example, for an inactivity policy, the primary criteria may be the last login time, lastLoginTime . The primary time attribute used to evaluate an account policy is set in the stateattrname attribute. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example stateattrname: lastLoginTime 6.3.3. Account Usability plug-in Plug-in Parameter Description Plug-in ID acctusability DN of Configuration Entry cn=Account Usability Plugin,cn=plugins,cn=config Description Checks the authentication status, or usability, of an account without actually authenticating as the given user Type preoperation Configurable Options on | off Default Setting on Dependencies Database Performance-Related Information None 6.3.4. ACL plug-in Plug-in Parameter Description Plug-in ID acl DN of Configuration Entry cn=ACL Plugin,cn=plugins,cn=config Description ACL access check plug-in Type accesscontrol Configurable Options on | off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Access control incurs a minimal performance hit. Leave this plug-in enabled since it is the primary means of access control for the server. 6.3.5. ACL Preoperation plug-in Plug-in Parameter Description Plug-in ID acl DN of Configuration Entry cn=ACL preoperation,cn=plugins,cn=config Description ACL access check plug-in Type preoperation Configurable Options on | off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Access control incurs a minimal performance hit. Leave this plug-in enabled since it is the primary means of access control for the server. 6.3.6. AD DN plug-in The AD DN plug-in supports multiple domain configurations. Create one configuration entry for each domain. Plug-in Parameter Description Plug-in ID addn DN of Configuration Entry cn=addn,cn=plugins,cn=config Description Enables the usage of Active Directory-formatted user names, such as user_name and user_name @ domain , for bind operations. Type preoperation Configurable Options on | off Default Setting off Configurable Arguments addn_default_domain : Sets the default domain that is automatically appended to user names without domain. Dependencies None Performance-Related Information None 6.3.6.1. addn_base Sets the base DN under which Directory Server searches the user's DN. Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any valid DN Default Value None Syntax DirectoryString Example addn_base: ou=People,dc=example,dc=com 6.3.6.2. addn_filter Sets the search filter. Directory Server replaces the %s variable automatically with the non-domain part of the authenticating user. For example, if the user name in the bind is user_name@example.com , the filter searches the corresponding DN which is (&(objectClass=account)(uid=user_name)) .
Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any valid DN Default Value None Syntax DirectoryString Example addn_filter: (&(objectClass=account)(uid=%s)) 6.3.6.3. cn Sets the domain name of the configuration entry. The plug-in uses the domain name from the authenticating user name to select the corresponding configuration entry. Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any string Default Value None Syntax DirectoryString Example cn: example.com 6.3.7. Alias Entries plug-in The Alias Entries plug-in checks the base entry for the object class alias and the aliasedObjectName attribute that contains a DN to another entry (an alias to another entry). During a search, the plug-in modifies the search base DN to this aliased DN. The Alias Entries plug-in supports only base level searches. Use the ldapsearch -a find command to retrieve entries with aliases. For the plug-in to return the aliased entry, the base entry must contain the following information: The alias object class. The aliasedObjectName attribute (known as the aliasedEntryName attribute in X.500) with a DN value pointing to another entry. Directory Server can return to the client the following errors: Error 32 (no such object) if the alias DN is missing. Error 53 (unwilling to perform) if the search is a non-base level search. Dereferencing is the conversion of an alias name to an object name. The process may require the examination of more than one alias entry. An alias entry may point to an entry that is not a leaf entry. An entry in the DIT may have multiple alias names, and several alias entries may point to the same entry. Example 6.3. An Entry with an alias dn: cn=Barbara Jensen,ou=Engineering,dc=example,dc=com objectClass: top objectClass: alias objectClass: extensibleObject cn: Barbara Jensen aliasedObjectName: cn=Barbara Smith,ou=Engineering,dc=example,dc=com Plug-in Parameter Description Plug-in ID Alias Entries DN of Configuration Entry cn=Alias Entries, cn=plugins, cn=config Description Checks the base entry for alias object class and aliasedObjectName attribute, during base level searches Type object Configurable Options on | off Default Setting off Configurable Arguments Alias entries belong to the alias object class. The aliasedObjectName attribute stores the DN of the entry that an alias points to. Dependencies Database Performance-Related Information Every alias entry must belong to the alias object class and have no subordinates. Further Information The aliasedObjectName attribute is known as the aliasedEntryName attribute in X.500. The distinguishedNameMatch matching rule and the DistinguishedName syntax are defined in RFC 4517 . 6.3.8. Attribute Uniqueness plug-in The Attribute Uniqueness plug-in ensures that the value of an attribute is unique across the directory or subtree. Plug-in Parameter Description Plug-in ID NSUniqueAttr DN of Configuration Entry cn=Attribute Uniqueness,cn=plugins,cn=config Description Checks that the values of specified attributes are unique each time a modification occurs on an entry. For example, most sites require that a user ID and email address be unique. Type preoperation Configurable Options on | off Default Setting off Configurable Arguments To check for UID attribute uniqueness in all listed subtrees, enter uid "DN" "DN"... . 
However, to check for UID attribute uniqueness when adding or updating entries with the requiredObjectClass , enter attribute="uid" MarkerObjectclass = "ObjectClassName" and, optionally requiredObjectClass = "ObjectClassName" . This starts checking for the required object classes from the parent entry containing the ObjectClass as defined by the MarkerObjectClass attribute. Dependencies Database Performance-Related Information Directory Server provides the UID Uniqueness Plug-in by default. To ensure unique values for other attributes, create instances of the Attribute Uniqueness Plug-in for those attributes. The UID Uniqueness Plug-in is off by default due to operation restrictions that need to be addressed before enabling the plug-in in a multi-supplier replication environment. Turning the plug-in on may slow down Directory Server performance. 6.3.8.1. cn Sets the name of the Attribute Uniqueness plug-in configuration record. You can use any string, but Red Hat recommends naming the configuration record attribute_name Attribute Uniqueness . Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid string Default Value None Syntax DirectoryString Example cn: mail Attribute Uniqueness 6.3.8.2. uniqueness-across-all-subtrees If enabled ( on ), the plug-in checks that the attribute is unique across all subtrees set. If you set the attribute to off , uniqueness is only enforced within the subtree of the updated entry. Parameter Description Entry DN cn= attribute_uniqueness_configuration_entry_name ,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example uniqueness-across-all-subtrees: off 6.3.8.3. uniqueness-attribute-name Sets the name of the attribute whose values must be unique. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_entry_name ,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example uniqueness-attribute-name: mail 6.3.8.4. uniqueness-exclude-subtrees Sets the DN under which the plug-in skips uniqueness verification of the attribute's value. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_entry_name ,cn=plugins,cn=config Valid Values Any valid subtree DN Default Value None Syntax DirectoryString Example uniqueness-exclude-subtrees: dc=private,dc=people,dc=example,dc=com 6.3.8.5. uniqueness-subtree-entries-oc Optionally, when using the uniqueness-top-entry-oc parameter, you can configure that the Attribute Uniqueness plug-in only verifies if an attribute is unique, if the entry contains the object class set in this parameter. Parameter Description Entry DN cn= attribute_uniqueness_configuration_entry_name ,cn=plugins,cn=config Valid Values Any valid object class Default Value None Syntax DirectoryString Example uniqueness-subtree-entries-oc: inetOrgPerson 6.3.8.6. uniqueness-subtrees Sets the DN under which the plug-in checks for uniqueness of the attribute's value. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_entry_name ,cn=plugins,cn=config Valid Values Any valid subtree DN Default Value None Syntax DirectoryString Example uniqueness-subtrees: ou=Sales,dc=example,dc=com 6.3.8.7. uniqueness-top-entry-oc Directory Server searches this object class in the parent entry of the updated object. 
If it was not found, the search continues at the higher level entry up to the root of the directory tree. If the object class was found, Directory Server verifies that the value of the attribute set in uniqueness-attribute-name is unique in this subtree. Parameter Description Entry DN cn= attribute_uniqueness_configuration_entry_name ,cn=plugins,cn=config Valid Values Any valid object class Default Value None Syntax DirectoryString Example uniqueness-top-entry-oc: nsContainer 6.3.9. Auto Membership plug-in Automembership essentially allows a static group to act like a dynamic group. Different automembership definitions create searches that are automatically run on all new directory entries. The automembership rules search for and identify matching entries - much like the dynamic search filters - and then explicitly add those entries as members to the specified static group. The Auto Membership Plug-in itself is a container entry. Each automember definition is a child of the Auto Membership Plug-in. The automember definition defines the LDAP search base and filter to identify entries and a default group to add them to. dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=ipHost autoMemberDefaultGroup: cn=systems,cn=hostgroups,ou=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn Each automember definition can have its own child entry that defines additional conditions for assigning the entry to group. Regular expressions can be used to include or exclude entries and assign them to specific groups based on those conditions. dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group for webservers cn: webservers autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com If the entry matches the main definition and not any of the regular expression conditions, then it uses the group in the main definition. If it matches a regular expression condition, then it is added to the regular expression condition group. Plug-in Parameter Description Plug-in ID Auto Membership DN of Configuration Entry cn=Auto Membership,cn=plugins,cn=config Description Container entry for automember definitions. Automember definitions search new entries and, if they match defined LDAP search filters and regular expression conditions, add the entry to a specified group automatically. Type preoperation Configurable Options on | off Default Setting off Configurable Arguments None for the main plug-in entry. The definition entry must specify an LDAP scope, LDAP filter, default group, and member attribute format. The optional regular expression child entry can specify inclusive and exclusive expressions and a different target group. Dependencies Database Performance-Related Information None. 6.3.9.1. autoMemberDefaultGroup This attribute sets a default or fallback group to add the entry to as a member. If only the definition entry is used, then this is the group to which all matching entries are added. If regular expression conditions are used, then this group is used as a fallback if an entry which matches the LDAP search filter do not match any of the regular expressions. 
Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any existing Directory Server group Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberDefaultGroup: cn=hostgroups,ou=groups,dc=example,dc=com 6.3.9.2. autoMemberDefinition (object class) This attribute identifies the entry as an automember definition. This entry must be a child of the Auto Membership Plug-in, cn=Auto Membership Plugin,cn=plugins,cn=config . Allowed Attributes autoMemberScope autoMemberFilter autoMemberDefaultGroup autoMemberGroupingAttr 6.3.9.3. autoMemberExclusiveRegex This attribute sets a single regular expression to use to identify entries to exclude . If an entry matches the exclusion condition, then it is not included in the group. Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is excluded in the group. The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax(3) man page . Note Exclude conditions are evaluated first and take precedence over include conditions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any regular expression Default Value None Single- or Multi-Valued Multi-valued Syntax DirectoryString Example autoMemberExclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com 6.3.9.4. autoMemberFilter This attribute sets a standard LDAP search filter to use to search for matching entries. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any valid LDAP search filter Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberFilter:objectclass=ntUser 6.3.9.5. autoMemberGroupingAttr This attribute gives the name of the member attribute in the group entry and the attribute in the object entry that supplies the member attribute value, in the format group_member_attr:entry_attr . This structures how the Automembership Plug-in adds a member to the group, depending on the group configuration. For example, for a groupOfUniqueNames user group, each member is added as a uniqueMember attribute. The value of uniqueMember is the DN of the user entry. In essence, each group member is identified by the attribute-value pair of uniqueMember: user_entry_DN . The member entry format, then, is uniqueMember:dn . Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberGroupingAttr: member:dn 6.3.9.6. autoMemberInclusiveRegex This attribute sets a single regular expression to use to identify entries to include . Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is included in the group (assuming it does not match an exclude expression). The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax(3) man page . Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any regular expression Default Value None Single- or Multi-Valued Multi-valued Syntax DirectoryString Example autoMemberInclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com 6.3.9.7. autoMemberProcessModifyOps By default, Directory Server invokes the Automembership plug-in for add and modify operations. 
With this setting, the plug-in changes groups when you add a group entry to a user or modify a group entry of a user. If you set the autoMemberProcessModifyOps parameter to off , Directory Server only invokes the Automembership plug-in when you add a group entry to a user. In this case, if an administrator changes a user entry, and that entry impacts what Automembership groups the user belongs to, the plug-in does not remove the user from the old group and only adds the new group. To update the old group, you must then manually run a fix-up task. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Values on | off Default Value on Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberProcessModifyOps: on 6.3.9.8. autoMemberRegexRule (object class) This attribute identifies the entry as a regular expression rule. This entry must be a child of an automember definition ( objectclass: autoMemberDefinition ). Allowed Attributes autoMemberInclusiveRegex autoMemberExclusiveRegex autoMemberTargetGroup 6.3.9.9. autoMemberScope This attribute sets the subtree DN to search for entries. This is the search base. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server subtree Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberScope: dc=example,dc=com 6.3.9.10. autoMemberTargetGroup This attribute sets which group to add the entry to as a member, if it meets the regular expression conditions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server group Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberTargetGroup: cn=webservers,cn=hostgroups,ou=groups,dc=example,dc=com 6.3.10. Binary Syntax plug-in Warning Binary syntax is deprecated. Use Octet String syntax instead. Plug-in Parameter Description Plug-in ID bin-syntax DN of Configuration Entry cn=Binary Syntax,cn=plugins,cn=config Description Syntax for handling binary data. Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.11. Bit String Syntax plug-in Plug-in Parameter Description Plug-in ID bitstring-syntax DN of Configuration Entry cn=Bit String Syntax,cn=plugins,cn=config Description Supports bit string syntax values and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.12. Bitwise plug-in Plug-in Parameter Description Plug-in ID bitwise DN of Configuration Entry cn=Bitwise Plugin,cn=plugins,cn=config Description Matching rule for performing bitwise operations against the LDAP server Type matchingrule Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.13.
Boolean Syntax plug-in Plug-in Parameter Description Plug-in ID boolean-syntax DN of Configuration Entry cn=Boolean Syntax,cn=plugins,cn=config Description Supports boolean syntax values (TRUE or FALSE) and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.14. Case Exact String Syntax plug-in Plug-in Parameter Description Plug-in ID ces-syntax DN of Configuration Entry cn=Case Exact String Syntax,cn=plugins,cn=config Description Supports case-sensitive matching for Directory String, IA5 String, and related syntaxes. This is not a case-exact syntax; this plug-in provides case-sensitive matching rules for different string syntaxes. Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.15. Case Ignore String Syntax plug-in Plug-in Parameter Description Plug-in ID directorystring-syntax DN of Configuration Entry cn=Case Ignore String Syntax,cn=plugins,cn=config Description Supports case-insensitive matching rules for Directory String, IA5 String, and related syntaxes. This is not a case-insensitive syntax; this plug-in provides case-insensitive matching rules for different string syntaxes. Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.16. Chaining Database plug-in Plug-in Parameter Description Plug-in ID chaining database DN of Configuration Entry cn=Chaining database,cn=plugins,cn=config Description Enables back end databases to be linked Type database Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information There are many performance related tuning parameters involved with the chaining database. Further Information A chaining database is also known as a database link . 6.3.17. Class of Service plug-in Plug-in Parameter Description Plug-in ID cos DN of Configuration Entry cn=Class of Service,cn=plugins,cn=config Description Allows for sharing of attributes between entries Type object Configurable Options on | off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in * Named: Views Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Leave this plug-in running at all times. 6.3.18. Content Synchronization plug-in Plug-in Parameter Description Plug-in ID content-sync-plugin DN of Configuration Entry cn=Content Synchronization,cn=plugins,cn=config Description Enables support for the SyncRepl protocol in Directory Server according to RFC 4533 . Type object Configurable Options on | off Default Setting off Configurable Arguments None Dependencies Retro Changelog plug-in Performance-Related Information If you know which back end or subtree clients access to synchronize data, limit the scope of the Retro Changelog plug-in accordingly. 6.3.19.
Country String Syntax plug-in Plug-in Parameter Description Plug-in ID countrystring-syntax DN of Configuration Entry cn=Country String Syntax,cn=plugins,cn=config Description Supports country naming syntax values and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.20. Delivery Method Syntax plug-in Plug-in Parameter Description Plug-in ID delivery-syntax DN of Configuration Entry cn=Delivery Method Syntax,cn=plugins,cn=config Description Supports values that are lists of preferred delivery methods and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.21. deref plug-in Plug-in Parameter Description Plug-in ID Dereference DN of Configuration Entry cn=deref,cn=plugins,cn=config Description For dereference controls in directory searches Type preoperation Configurable Options on | off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.22. Distinguished Name Syntax plug-in Plug-in Parameter Description Plug-in ID dn-syntax DN of Configuration Entry cn=Distinguished Name Syntax,cn=plugins,cn=config Description Supports DN value syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.23. Distributed Numeric Assignment plug-in The Distributed Numeric Assignment Plug-in manages ranges of numbers and assigns unique numbers within that range to entries. By breaking number assignments into ranges, the Distributed Numeric Assignment Plug-in allows multiple servers to assign numbers without conflict. The plug-in also manages the ranges assigned to servers, so that if one instance runs through its range quickly, it can request additional ranges from the other servers. Distributed numeric assignment is handled per-attribute and can be configured to work with one or more attribute types; it is only applied to specific suffixes and specific entries within the subtree. Plug-in Information Description Plug-in ID Distributed Numeric Assignment Configuration Entry DN cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Description Distributed Numeric Assignment plugin Type preoperation Configurable Options on | off Default Setting off Configurable Arguments Dependencies Database Performance-Related Information None 6.3.23.1. dnaFilter This attribute sets an LDAP filter to use to search for and identify the entries to which to apply the distributed numeric assignment range. The dnaFilter attribute is required to set up distributed numeric assignment for an attribute.
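For illustration, the following is a minimal sketch of a DNA configuration entry that combines dnaFilter with the other range attributes described in this section. The entry name ( cn=UID numbers ), the use of the extensibleObject object class, and the attribute values are assumptions for the example only; adjust them to the attribute, filter, and subtree that you actually manage.
# A hypothetical DNA configuration entry for assigning uidNumber values
dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: UID numbers
dnaType: uidNumber
dnaFilter: (objectclass=posixAccount)
dnaScope: ou=people,dc=example,dc=com
dnaNextValue: 1000
dnaMaxValue: 2000
dnaMagicRegen: -1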
Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any valid LDAP filter Default Value None Syntax DirectoryString Example dnaFilter: (objectclass=person) 6.3.23.2. dnaHostname This attribute identifies the host name of a server in a shared range, as part of the DNA range configuration for that specific host in multi-supplier replication. Available ranges are tracked by host and the range information is replicated among all suppliers so that if any supplier runs low on available numbers, it can use the host information to contact another supplier and request an new range. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString Valid Range Any valid host name Default Value None Example dnahostname: ldap1.example.com 6.3.23.3. dnaInterval This attribute sets an interval to use to increment through numbers in a range. Essentially, this skips numbers at a predefined rate. If the interval is 3 and the first number in the range is 1 , the number used in the range is 4 , then 7 , then 10 , incrementing by three for every new number assignment. In a replication environment, the dnaInterval enables multiple servers to share the same range. However, when you configure different servers that share the same range, set the dnaInterval and dnaNextVal parameters accordingly so that the different servers do not generate the same values. You must also consider this if you add new servers to the replication topology. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any integer Default Value 1 Syntax Integer Example dnaInterval: 1 6.3.23.4. dnaMagicRegen This attribute sets a user-defined value that instructs the plug-in to assign a new value for the entry. The magic value can be used to assign new unique numbers to existing entries or as a standard setting when adding new entries. The magic entry should be outside of the defined range for the server so that it cannot be triggered by accident. Note that this attribute does not have to be a number when used on a DirectoryString or other character type. However, in most cases the DNA plug-in is used on attributes which only accept integer values, and in such cases the dnamagicregen value must also be an integer. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any string Default Value None Syntax DirectoryString Example dnaMagicRegen: -1 6.3.23.5. dnaMaxValue This attribute sets the maximum value that can be assigned for the range. The default is -1 , which is the same as setting the highest 64-bit integer. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems; -1 is unlimited Default Value -1 Syntax Integer Example dnaMaxValue: 1000 6.3.23.6. dnaNextRange This attribute defines the range to use when the current range is exhausted. This value is automatically set when range is transferred between servers, but it can also be manually set to add a range to a server if range requests are not used. The dnaNextRange attribute should be set explicitly only if a separate, specific range has to be assigned to other servers. 
Any range set in the dnaNextRange attribute must be unique from the available range for the other servers to avoid duplication. If there is no request from the other servers and the server where dnaNextRange is set explicitly has reached its set dnaMaxValue , the set of values (part of the dnaNextRange ) is allocated from this deck. The dnaNextRange allocation is also limited by the dnaThreshold attribute that is set in the DNA configuration. Any range allocated to another server for dnaNextRange cannot violate the threshold for the server, even if the range is available on the deck of dnaNextRange . Note The dnaNextRange attribute is handled internally if it is not set explicitly. When it is handled automatically, the dnaMaxValue attribute serves as the upper limit for the range. The attribute sets the range in the format lower_range-upper_range . Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems for the lower and upper ranges Default Value None Syntax DirectoryString Example dnaNextRange: 100-500 6.3.23.7. dnaNextValue This attribute gives the next available number which can be assigned. After being initially set in the configuration entry, this attribute is managed by the Distributed Numeric Assignment Plug-in. The dnaNextValue attribute is required to set up distributed numeric assignment for an attribute. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value -1 Syntax Integer Example dnaNextValue: 1 6.3.23.8. dnaPluginConfig (object class) This object class is used for entries which configure the DNA plug-in and numeric ranges to assign to entries. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.324 Allowed Attributes dnaType dnaPrefix dnaNextValue dnaMaxValue dnaInterval dnaMagicRegen dnaFilter dnaScope dnaSharedCfgDN dnaThreshold dnaNextRange dnaRangeRequestTimeout cn 6.3.23.9. dnaPortNum This attribute gives the standard port number to use to connect to the host identified in dnaHostname . Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax Integer Valid Range 0 to 65535 Default Value 389 Example dnaPortNum: 389 6.3.23.10. dnaPrefix This attribute defines a prefix that can be prepended to the generated number values for the attribute. For example, to generate a user ID such as user1000 , the dnaPrefix setting would be user . dnaPrefix can hold any kind of string. However, some possible values for dnaType (such as uidNumber and gidNumber ) require only integer values. To use a prefix string, consider using a custom attribute for dnaType which allows strings. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any string Default Value None Example dnaPrefix: id 6.3.23.11. dnaRangeRequestTimeout One potential situation with the Distributed Numeric Assignment Plug-in is that one server begins to run out of numbers to assign. The dnaThreshold attribute sets a threshold of available numbers in the range, so that the server can request an additional range from the other servers before it is unable to perform number assignments.
The dnaRangeRequestTimeout attribute sets a timeout period, in seconds, for range requests so that the server does not stall waiting on a new range from one server and can request a range from a new server. For range requests to be performed, the dnaSharedCfgDN attribute must be set. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value 10 Syntax Integer Example dnaRangeRequestTimeout: 15 6.3.23.12. dnaRemainingValues This attribute contains the number of values that are remaining and available to a server to assign to entries. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax Integer Valid Range Any integer Default Value None Example dnaRemainingValues: 1000 6.3.23.13. dnaRemoteBindCred Specifies the Replication Manager's password. If you set a bind method in the dnaRemoteBindMethod attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration entry under the cn=config entry. Set the parameter in plain text. The value is automatically AES-encrypted before it is stored. A server restart is required for the change to take effect. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString {AES} encrypted_password Valid Values Any valid AES-encrypted password. Default Value Example dnaRemoteBindCred: {AES-TUhNR0NTcUdTSWIzRFFFRkRUQm1NRVVHQ1NxR1NJYjNEUUVGRERBNEJDUmxObUk0WXpjM1l5MHdaVE5rTXpZNA0KTnkxaE9XSmhORGRoT0MwMk1ESmpNV014TUFBQ0FRSUNBU0F3Q2dZSUtvWklodmNOQWdjd0hRWUpZSVpJQVdVRA0KQkFFcUJCQk5KbUFDUWFOMHlITWdsUVp3QjBJOQ==}bBR3On6cBmw0DdhcRx826g== 6.3.23.14. dnaRemoteBindDN Specifies the Replication Manager DN. If you set a bind method in the dnaRemoteBindMethod attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration under the cn=config entry. A server restart is required for the change to take effect. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString Valid Values Any valid Replication Manager DN. Default Value Example dnaRemoteBindDN: cn=replication manager,cn=config 6.3.23.15. dnaRemoteBindMethod Specifies the remote bind method. If you set a bind method in this attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration entry under the cn=config entry. A server restart is required for the change to take effect. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax DirectoryString Valid Values SIMPLE | SSL | SASL/GSSAPI | SASL/DIGEST-MD5 Default Value Example dnaRemoteBindMethod: SIMPLE 6.3.23.16. dnaRemoteConnProtocol Specifies the remote connection protocol. A server restart is required for the change to take effect. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax DirectoryString Valid Values LDAP , SSL , or TLS Default Value Example dnaRemoteConnProtocol: LDAP 6.3.23.17. 
dnaScope This attribute sets the base DN to search for entries to which to apply the distributed numeric assignment. This is analogous to the base DN in an ldapsearch . Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry Default Value None Syntax DirectoryString Example dnaScope: ou=people,dc=example,dc=com 6.3.23.18. dnaSecurePortNum This attribute gives the secure (TLS) port number to use to connect to the host identified in dnaHostname . Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax Integer Valid Range 0 to 65535 Default Value 636 Example dnaSecurePortNum: 636 6.3.23.19. dnaSharedCfgDN This attribute defines a shared identity that the servers can use to transfer ranges to one another. This entry is replicated between servers and is managed by the plug-in to let the other servers know what ranges are available. This attribute must be set for range transfers to be enabled. Note The shared configuration entry must be configured in the replicated subtree, so that the entry can be replicated to the servers. For example, if the ou=People,dc=example,dc=com subtree is replicated, then the configuration entry must be in that subtree, such as ou=UID Number Ranges , ou=People,dc=example,dc=com . The entry identified by this setting must be manually created by the administrator. The server will automatically contain a sub-entry beneath it to transfer ranges. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any DN Default Value None Syntax DN Example dnaSharedCfgDN: cn=range transfer user,cn=config 6.3.23.20. dnaSharedConfig (object class) This object class is used to configure the shared configuration entry that is replicated between suppliers that are all using the same DNA plug-in configuration for numeric assignements. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.325 Allowed Attributes dnaHostname dnaPortNum dnaSecurePortNum dnaRemainingValues 6.3.23.21. dnaThreshold One potential situation with the Distributed Numeric Assignment Plug-in is that one server begins to run out of numbers to assign, which can cause problems. The Distributed Numeric Assignment Plug-in allows the server to request a new range from the available ranges on other servers. So that the server can recognize when it is reaching the end of its assigned range, the dnaThreshold attribute sets a threshold of remaining available numbers in the range. When the server hits the threshold, it sends a request for a new range. For range requests to be performed, the dnaSharedCfgDN attribute must be set. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value 100 Syntax Integer Example dnaThreshold: 100 6.3.23.22. dnaType This attribute sets which attributes have unique numbers being generated for them. In this case, whenever the attribute is added to the entry with the magic number, an assigned value is automatically supplied. This attribute is required to set a distributed numeric assignment for an attribute. If the dnaPrefix attribute is set, then the prefix value is prepended to whatever value is generated by dnaType . 
The dnaPrefix value can be any kind of string, but some reasonable values for dnaType (such as uidNumber and gidNumber ) require only integer values. To use a prefix string, consider using a custom attribute for dnaType which allows strings. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Example dnaType: uidNumber 6.3.24. Enhanced Guide Syntax plug-in Plug-in Parameter Description Plug-in ID enhancedguide-syntax DN of Configuration Entry cn=Enhanced Guide Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for creating complex criteria, based on attributes and filters, to build searches; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.25. Facsimile Telephone Number Syntax plug-in Plug-in Parameter Description Plug-in ID facsimile-syntax DN of Configuration Entry cn=Facsimile Telephone Number Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for fax numbers; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.26. Fax Syntax plug-in Plug-in Parameter Description Plug-in ID fax-syntax DN of Configuration Entry cn=Fax Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for storing images of faxed objects; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.27. Generalized Time Syntax plug-in Plug-in Parameter Description Plug-in ID time-syntax DN of Configuration Entry cn=Generalized Time Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for dealing with dates, times and time zones; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information The Generalized Time String consists of a four digit year, two digit month (for example, 01 for January), two digit day, two digit hour, two digit minute, two digit second, an optional decimal part of a second, and a time zone indication. Red Hat strongly recommends using the Z time zone indication, which indicates Greenwich Mean Time. See also RFC 4517 . 6.3.28. Guide Syntax plug-in Warning This syntax is deprecated. Use Enhanced Guide syntax instead. 
Plug-in Parameter Description Plug-in ID guide-syntax DN of Configuration Entry cn=Guide Syntax,cn=plugins,cn=config Description Syntax for creating complex criteria, based on attributes and filters, to build searches Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information This syntax is obsolete. The Enhanced Guide Syntax should be used instead. 6.3.29. HTTP Client plug-in Plug-in Parameter Description Plug-in ID http-client DN of Configuration Entry cn=HTTP Client,cn=plugins,cn=config Description HTTP client plug-in Type preoperation Configurable Options on | off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information 6.3.30. Integer Syntax plug-in Plug-in Parameter Description Plug-in ID int-syntax DN of Configuration Entry cn=Integer Syntax,cn=plugins,cn=config Description Supports integer syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.31. Internationalization plug-in Plug-in Parameter Description Plug-in ID orderingrule DN of Configuration Entry cn=Internationalization Plugin,cn=plugins,cn=config Description Enables internationalized strings to be ordered in the directory Type matchingrule Configurable Options on | off Default Setting on Configurable Arguments The Internationalization Plug-in has one argument, which must not be modified, which specifies the location of the /etc/dirsrv/config/slapd-collations.conf file. This file stores the collation orders and locales used by the Internationalization Plug-in. Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.32. JPEG Syntax plug-in Plug-in Parameter Description Plug-in ID jpeg-syntax DN of Configuration Entry cn=JPEG Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for JPEG image data; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.33. ldbm database plug-in Plug-in Parameter Description Plug-in ID ldbm-backend DN of Configuration Entry cn=ldbm database,cn=plugins,cn=config Description Implements local databases Type database Configurable Options Default Setting on Configurable Arguments None Dependencies * Syntax * matchingRule Performance-Related Information See Section 6.4, "Database plug-in attributes" for further information on database configuration. 6.3.34. Linked Attributes plug-in Many times, entries have inherent relationships to each other (such as managers and employees, document entries and their authors, or special groups and group members). While attributes exist that reflect these relationships, these attributes have to be added and updated on each entry manually. 
That can lead to a whimsically inconsistent set of directory data, where these entry relationships are unclear, outdated, or missing. The Linked Attributes Plug-in allows one attribute, set in one entry, to update another attribute in another entry automatically. The first attribute has a DN value, which points to the entry to update; the second entry attribute also has a DN value which is a back-pointer to the first entry. The link attribute which is set by users and the dynamically-updated "managed" attribute in the affected entries are both defined by administrators in the Linked Attributes Plug-in instance. Conceptually, this is similar to the way that the MemberOf Plug-in uses the member attribute in group entries to set memberOf attribute in user entries. Only with the Linked Attributes Plug-in, all of the link/managed attributes are user-defined and there can be multiple instances of the plug-in, each reflecting different link-managed relationships. There are a couple of caveats for linking attributes: Both the link attribute and the managed attribute must have DNs as values. The DN in the link attribute points to the entry to add the managed attribute to. The managed attribute contains the linked entry DN as its value. The managed attribute must be multi-valued. Otherwise, if multiple link attributes point to the same managed entry, the managed attribute value would not be updated accurately. Plug-in Parameter Description Plug-in ID Linked Attributes DN of Configuration Entry cn=Linked Attributes,cn=plugins,cn=config Description Container entry for linked-managed attribute configuration entries. Each configuration entry under the container links one attribute to another, so that when one entry is updated (such as a manager entry), then any entry associated with that entry (such as a custom directReports attribute) are automatically updated with a user-specified corresponding attribute. Type preoperation Configurable Options on | off Default Setting off Configurable Arguments None for the main plug-in entry. Each plug-in instance has three possible attributes: * linkType, which sets the primary attribute for the plug-in to monitor * managedType, which sets the attribute which will be managed dynamically by the plug-in whenever the attribute in linkType is modified * linkScope, which restricts the plug-in activity to a specific subtree within the directory tree Dependencies Database Performance-Related Information Any attribute set in linkType must only allow values in a DN format. Any attribute set in managedType must be multi-valued. 6.3.34.1. linkScope This restricts the scope of the plug-in, so it operates only in a specific subtree or suffix. If no scope is given, then the plug-in will update any part of the directory tree. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any DN Default Value None Syntax DN Example linkScope: ou=People,dc=example,dc=com 6.3.34.2. linkType This sets the user-managed attribute. This attribute is modified and maintained by users, and then when this attribute value changes, the linked attribute is automatically updated in the targeted entries. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Syntax DirectoryString Example linkType: directReport 6.3.34.3. managedType This sets the managed, or plug-in maintained, attribute. 
This attribute is managed dynamically by the Linked Attributes Plug-in instance. Whenever a change is made to the link attribute, the plug-in updates all of the managed attributes on the targeted entries. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Syntax DN Example managedType: manager 6.3.35. Managed Entries plug-in In some unique circumstances, it is useful to have an entry created automatically when another entry is created. For example, this can be part of Posix integration by creating a specific group entry when a new user is created. Each instance of the Managed Entries Plug-in identifies two areas: The scope of the plug-in, meaning the subtree and the search filter to use to identify entries which require a corresponding managed entry A template entry that defines what the managed entry should look like Plug-in Information Description Plug-in ID Managed Entries Configuration Entry DN cn=Managed Entries,cn=plugins,cn=config Description Container entry for automatically generated directory entries. Each configuration entry defines a target subtree and a template entry. When a matching entry in the target subtree is created, then the plug-in automatically creates a new, related entry based on the template. Type preoperation Configurable Options on | off Default Setting off Configurable Arguments None for the main plug-in entry. Each plug-in instance has four possible attributes: * originScope, which sets the search base * originFilter, which sets the search filter for matching entries * managedBase, which sets the subtree under which to create new managed entries * managedTemplate, which is the template entry used to create the managed entries Dependencies Database Performance-Related Information None 6.3.35.1. managedBase This attribute sets the subtree under which to create the managed entries. This can be any entry in the directory tree. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server subtree Default Value None Syntax DirectoryString Example managedBase: ou=groups,dc=example,dc=com 6.3.35.2. managedTemplate This attribute identifies the template entry to use to create the managed entry. This entry can be located anywhere in the directory tree; however, it is recommended that this entry is in a replicated suffix so that all suppliers and consumers in replication are using the same template. The attributes used to create the managed entry template are described in the Red Hat Directory Server Configuration, Command, and File Reference . Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server entry of the mepTemplateEntry object class Default Value None Syntax DirectoryString Example managedTemplate: cn=My Template,ou=Templates,dc=example,dc=com 6.3.35.3. originFilter This attribute sets the search filter to use to search for and identify the entries within the subtree which require a managed entry. The filter allows the managed entries behavior to be limited to a specific type of entry or subset of entries. The syntax is the same as a regular search filter. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any valid LDAP filter Default Value None Syntax DirectoryString Example originFilter: objectclass=posixAccount 6.3.35.4.
originScope This attribute sets the scope of the search to use to see which entries the plug-in monitors. If a new entry is created within the scope subtree, then the Managed Entries Plug-in creates a new managed entry that corresponds to it. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server subtree Default Value None Syntax DirectoryString Example originScope: ou=people,dc=example,dc=com 6.3.36. MemberOf plug-in Group membership is defined within group entries using attributes such as member . Searching for the member attribute makes it easy to list all of the members for the group. However, group membership is not reflected in the member's user entry, so it is impossible to tell to what groups a person belongs by looking at the user's entry. The MemberOf Plug-in synchronizes the group membership in group members with the members' individual directory entries by identifying changes to a specific member attribute (such as member ) in the group entry and then working back to write the membership changes over to a specific attribute in the members' user entries. Plug-in Information Description Plug-in ID memberOf Configuration Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Description Manages the memberOf attribute on user entries, based on the member attributes in the group entry. Type postoperation Configurable Options on | off Default Setting off Configurable Arguments * memberOfAttr sets the attribute to generate in people's entries to show their group membership. * memberOfGroupAttr sets the attribute to use to identify group member's DNs. Dependencies Database Performance-Related Information None 6.3.36.1. cn Sets the name of the plug-in instance. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values Any valid string Default Value Syntax DirectoryString Example cn: Example MemberOf Plugin Instance 6.3.36.2. memberOfAllBackends This attribute specifies whether to search the local suffix for user entries or all available suffixes. This can be desirable in directory trees where users may be distributed across multiple databases so that group membership is evaluated comprehensively and consistently. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example memberOfAllBackends: on 6.3.36.3. memberOfAttr This attribute specifies the attribute in the user entry for Directory Server to manage to reflect group membership. The MemberOf Plug-in generates the value of the attribute specified here in the directory entry for the member. There is a separate attribute for every group to which the user belongs. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute that supports DN syntax Default Value memberOf Syntax DirectoryString Example memberOfAttr: memberOf 6.3.36.4. memberOfAutoAddOC To enable the memberOf plug-in to add the memberOf attribute to a user, the user object must contain an object class that allows this attribute. If an entry does not have an object class that allows the memberOf attribute then the memberOf plugin will automatically add the object class listed in the memberOfAutoAddOC parameter. This setting does not require restarting the server to take effect. 
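The attribute can be changed with an ordinary ldapmodify operation. The following sketch simply re-applies the default nsMemberOf object class and is meant only to show the syntax; substitute whichever auxiliary object class in your schema allows the memberOf attribute. The bind parameters are assumptions, following the style used elsewhere in this chapter.
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=MemberOf Plugin,cn=plugins,cn=config
changetype: modify
replace: memberOfAutoAddOC
memberOfAutoAddOC: nsMemberOf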
Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values Any Directory Server object class Default Value nsMemberOf Syntax DirectoryString Example memberOfAutoAddOC: nsMemberOf 6.3.36.5. memberOfEntryScope If you configured several back ends or multiple-nested suffixes, the multi-valued memberOfEntryScope parameter enables you to set what suffixes the MemberOf plug-in works on. If the parameter is not set, the plug-in works on all suffixes. The value set in the memberOfEntryScopeExcludeSubtree parameter has a higher priority than values set in memberOfEntryScope . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry DN. Default Value Syntax DirectoryString Example memberOfEntryScope: ou=people,dc=example,dc=com 6.3.36.6. memberOfEntryScopeExcludeSubtree If you configured several back ends or multiple-nested suffixes, the multi-valued memberOfEntryScopeExcludeSubtree parameter enables you to set what suffixes the MemberOf plug-in excludes. The value set in the memberOfEntryScopeExcludeSubtree parameter has a higher priority than values set in memberOfEntryScope . If the scopes set in both parameters overlap, the MemberOf plug-in only works on the non-overlapping directory entries. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry DN. Default Value Syntax DirectoryString Example memberOfEntryScopeExcludeSubtree: ou=sample,dc=example,dc=com 6.3.36.7. memberOfGroupAttr This attribute specifies the attribute in the group entry to use to identify the DNs of group members. By default, this is the member attribute, but it can be any membership-related attribute that contains a DN value, such as uniquemember or member . Note Any attribute can be used for the memberOfGroupAttr value, but the MemberOf Plug-in only works if the value of the target attribute contains the DN of the member entry. For example, the member attribute contains the DN of the member's user entry: member: uid=jsmith,ou=People,dc=example,dc=com Some member-related attributes do not contain a DN, like the memberURL attribute. That attribute will not work as a value for memberOfGroupAttr . The memberURL value is a URL, and a non-DN value cannot work with the MemberOf Plug-in. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value member Syntax DirectoryString Example memberOfGroupAttr: member 6.3.36.8. memberOfSkipNested If you do not use nested groups in the directory, set the memberOfSkipNested attribute to on to skip the nested group check. It significantly improves response time of update operations when Directory Server needs to compute membership in more than 10000 entries. You do not need to restart the server to apply changes. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid range on | off Default value off Syntax DirectoryString Example memberOfSkipNested: off 6.3.37.
Multi-supplier Replication plug-in Plug-in Parameter Description Plug-in ID replication-multisupplier DN of Configuration Entry cn=Multisupplier Replication Plugin,cn=plugins,cn=config Description Enables replication between two current Directory Server Type object Configurable Options on | off Default Setting on Configurable Arguments None Dependencies * Named: ldbm database * Named: DES * Named: Class of Service Performance-Related Information Further Information Turn this plug-in off if one server will never replicate. 6.3.38. Name and Optional UID Syntax plug-in Plug-in Parameter Description Plug-in ID nameoptuid-syntax DN of Configuration Entry cn=Name And Optional UID Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules to store and search for a DN with an optional unique ID; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information The optional UID is used to distinguish between entries which may have identical DNs or naming attributes. See also RFC 4517 . 6.3.39. Numeric String Syntax plug-in Plug-in Parameter Description Plug-in ID numstr-syntax DN of Configuration Entry cn=Numeric String Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for strings of numbers and spaces; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.40. Octet String Syntax plug-in Note Use the Octet String syntax instead of Binary, which is deprecated. Plug-in Parameter Description Plug-in ID octetstring-syntax DN of Configuration Entry cn=Octet String Syntax,cn=plugins,cn=config Description Supports octet string syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.41. OID Syntax plug-in Plug-in Parameter Description Plug-in ID oid-syntax DN of Configuration Entry cn=OID Syntax,cn=plugins,cn=config Description Supports object identifier (OID) syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.42. PAM Pass Through Auth plug-in Local PAM configurations on Unix systems can leverage an external authentication store for LDAP users. This is a form of pass-through authentication which allows Directory Server to use the externally-stored user credentials for directory access. PAM pass-through authentication is configured in child entries beneath the PAM Pass Through Auth Plug-in container entry. All of the possible configuration attributes for PAM authentication (defined in the 60pam-plugin.ldif schema file) are available to a child entry; the child entry must be an instance of the PAM configuration object class. Example 6.4. 
Example PAM Pass Through Auth Configuration Entries dn: cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: PAM Pass Through Auth nsslapd-pluginPath: libpam-passthru-plugin nsslapd-pluginInitfunc: pam_passthruauth_init nsslapd-pluginType: preoperation nsslapd-pluginEnabled: on nsslapd-pluginLoadGlobal: true nsslapd-plugin-depends-on-type: database nsslapd-pluginId: pam_passthruauth nsslapd-pluginVersion: 9.0.0 nsslapd-pluginVendor: Red Hat nsslapd-pluginDescription: PAM pass through authentication plugin dn: cn=Example PAM Config,cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: Example PAM Config pamMissingSuffix: ALLOW pamExcludeSuffix: cn=config pamIDMapMethod: RDN ou=people,dc=example,dc=com pamIDMapMethod: ENTRY ou=engineering,dc=example,dc=com pamIDAttr: customPamUid pamFilter: (manager=uid=bjensen,ou=people,dc=example,dc=com) pamFallback: FALSE pamSecure: TRUE pamService: ldapserver The PAM configuration, at a minimum, must define a mapping method (a way to identify what the PAM user ID is from the Directory Server entry), the PAM service to use, and whether to use a secure connection to the service. pamIDMapMethod: RDN pamSecure: FALSE pamService: ldapserver The configuration can be expanded for special settings, such as to exclude or specifically include subtrees or to map a specific attribute value to the PAM user ID. Plug-in Parameter Description Plug-in ID pam_passthruauth DN of Configuration Entry cn=PAM Pass Through Auth,cn=plugins,cn=config Description Enables pass-through authentication for PAM, meaning that a PAM service can use the Directory Server as its user authentication store. Type preoperation Configurable Options on | off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information 6.3.42.1. pamConfig (object class) This object class is used to define the PAM configuration to interact with the directory service. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.318 Allowed Attributes pamExcludeSuffix pamIncludeSuffix pamMissingSuffix pamFilter pamIDAttr pamIDMapMethod pamFallback pamSecure pamService nsslapd-pluginConfigArea 6.3.42.2. pamExcludeSuffix This attribute specifies a suffix to exclude from PAM authentication. OID 2.16.840.1.113730.3.1.2068 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server 6.3.42.3. pamFallback Sets whether to fall back to regular LDAP authentication if PAM authentication fails. OID 2.16.840.1.113730.3.1.2072 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.42.4. pamFilter Sets an LDAP filter to use to identify specific entries within the included suffixes for which to use PAM pass-through authentication. If not set, all entries within the suffix are targeted by the configuration entry. OID 2.16.840.1.113730.3.1.2131 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.42.5. pamIDAttr This attribute contains the attribute name which is used to hold the PAM user ID. OID 2.16.840.1.113730.3.1.2071 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 6.3.42.6.
pamIDMapMethod Gives the method to use to map the LDAP bind DN to a PAM identity. Note Directory Server user account inactivation is only validated using the ENTRY mapping method. With RDN or DN, a Directory Server user whose account is inactivated can still bind to the server successfully. OID 2.16.840.1.113730.3.1.2070 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.42.7. pamIncludeSuffix This attribute sets a suffix to include for PAM authentication. OID 2.16.840.1.113730.3.1.2067 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server 6.3.42.8. pamMissingSuffix Identifies how to handle missing include or exclude suffixes. The options are ERROR (which causes the bind operation to fail); ALLOW, which logs an error but allows the operation to proceed; and IGNORE, which allows the operation and does not log any errors. OID 2.16.840.1.113730.3.1.2069 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.42.9. pamModuleIsThreadSafe By default, Directory Server serializes the Pluggable Authentication Module (PAM) authentications. If you set the pamModuleIsThreadSafe attribute to on , Directory Server starts to perform PAM authentications in parallel. However, ensure that the PAM module you are using is a thread-safe module. Currently, you can use the ldapmodify utility to configure the pamModuleIsThreadSafe attribute. To apply changes, restart the server. OID 2.16.840.1.113730.3.1.2399 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.42.10. pamSecure Requires a secure TLS connection for PAM authentication. OID 2.16.840.1.113730.3.1.2073 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.42.11. pamService Contains the service name to pass to PAM. This assumes that the service specified has a configuration file in the /etc/pam.d/ directory. Important The pam_fprintd.so module cannot be in the configuration file referenced by the pamService attribute of the PAM Pass-Through Authentication Plug-in configuration. Using the PAM pam_fprintd.so module causes Directory Server to hit the max file descriptor limit and can cause the Directory Server process to abort. OID 2.16.840.1.113730.3.1.2074 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Directory Server 6.3.43. Pass Through Authentication plug-in Plug-in Parameter Description Plug-in ID passthruauth DN of Configuration Entry cn=Pass Through Authentication,cn=plugins,cn=config Description Enables pass-through authentication , the mechanism which allows one directory to consult another to authenticate bind requests. Type preoperation Configurable Options on | off Default Setting off Configurable Arguments ldap://example.com:389/o=example Dependencies Database Performance-Related Information Pass-through authentication slows down bind requests a little because they have to make an extra hop to the remote server. 6.3.44. Password Storage Schemes Directory Server implements the password storage schemes as plug-ins. However, the cn=Password Storage Schemes,cn=plugins,cn=config entry itself is just a container, not a plug-in entry.
All password storage scheme plug-ins are stored as a subentry of this container. To display all password storage schemes plug-ins, enter: # ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -x \ -b "cn=Password Storage Schemes,cn=plugins,cn=config" -s sub "(objectclass=*)" dn Warning Red Hat recommends not disabling the password scheme plug-ins nor to change the configurations of the plug-ins to prevent unpredictable authentication behavior. Strong Password Storage Schemes Red Hat recommends using only the following strong password storage schemes (strongest first): PBKDF2-SHA512 (default). The PBKDF2-SHA512 is more secure than PBKDF2_SHA256 . The password-based key derivation function 2 (PBKDF2) is designed to expend resources to counter brute force attacks. PBKDF2 supports a variable number of iterations to apply the hashing algorithm. Higher iterations improve security but require more hardware resources. To apply the PBKDF2-SHA512 algorithm, Directory Server uses 10,000 iterations. Note The network security service (NSS) database in Red Hat Enterprise Linux 6 does not support PBKDF2. Therefore you cannot use this password scheme in a replication topology with Directory Server 9. SSHA512 The salted secure hashing algorithm (SSHA) implements an enhanced version of the secure hashing algorithm (SHA), that uses a randomly generated salt to increase the security of the hashed password. SSHA512 implements the hashing algorithm using 512 bits. Weak Password Storage Schemes Besides the recommended strong password storage schemes, Directory Server supports the following weak schemes for backward compatibility: AES CLEAR CRYPT CRYPT-MD5 CRYPT-SHA256 CRYPT-SHA512 DES MD5 NS-MTA-MD5 [a] SHA [b] SHA256 SHA384 SHA512 SMD5 SSHA SSHA256 SSHA384 [a] Directory Server only supports authentication using this scheme. You can no longer use it to encrypt passwords. [b] 160 bit Important Only continue using a weak scheme over a short time frame, as it increases security risks. 6.3.45. Posix Winsync API plug-in By default, Posix-related attributes are not synchronized between Active Directory and Red Hat Directory Server. On Linux systems, system users and groups are identified as Posix entries, and LDAP Posix attributes contain that required information. However, when Windows users are synced over, they have ntUser and ntGroup attributes automatically added which identify them as Windows accounts, but no Posix attributes are synced over (even if they exist on the Active Directory entry) and no Posix attributes are added on the Directory Server side. The Posix Winsync API Plug-in synchronizes POSIX attributes between Active Directory and Directory Server entries. Note All POSIX attributes (such as uidNumber , gidNumber , and homeDirectory ) are synchronized between Active Directory and Directory Server entries. However, if a new POSIX entry or POSIX attributes are added to an existing entry in Directory Server, only the POSIX attributes are synchronized over to the Active Directory corresponding entry . The POSIX object class ( posixAccount for users and posixGroup for groups) is not added to the Active Directory entry. This plug-in is disabled by default and must be enabled before any Posix attributes will be synchronized from the Active Directory entry to the Directory Server entry. 
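For illustration only, the plug-in can be enabled by setting the standard nsslapd-pluginEnabled switch on its configuration entry (the entry DN is listed in the table that follows) and then restarting the instance, since enabling or disabling a plug-in generally requires a restart. The bind parameters in this sketch are assumptions.
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=Posix Winsync API,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on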
Plug-in Parameter Description Plug-in ID posix-winsync-plugin DN of Configuration Entry cn=Posix Winsync API,cn=plugins,cn=config Description Enables and configures Windows synchronization for Posix attributes set on Active Directory user and group entries. Type preoperation Configurable Arguments * on | off * memberUID mapping (groups) * converting and sorting memberUID values in lower case (groups) * memberOf fix-up tasks with sync operations * use Windows 2003 Posix schema Default Setting off Configurable Arguments None Dependencies database 6.3.45.1. posixWinsyncCreateMemberOfTask This attribute sets whether to run the memberOf fix-up task immediately after a sync run in order to update group memberships for synced users. This is disabled by default because the memberOf fix-up task can be resource-intensive and cause performance issues if it is run too frequently. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncCreateMemberOfTask: false 6.3.45.2. posixWinsyncLowerCaseUID This attribute sets whether to store (and, if necessary, convert) the UID value in the memberUID attribute in lower case. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncLowerCaseUID: false 6.3.45.3. posixWinsyncMapMemberUID This attribute sets whether to map the memberUID attribute in an Active Directory group to the uniqueMember attribute in a Directory Server group. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value true Example posixWinsyncMapMemberUID: false 6.3.45.4. posixWinsyncMapNestedGrouping The posixWinsyncMapNestedGrouping parameter manages if nested groups are updated when memberUID attributes in an Active Directory POSIX group change. Updating nested groups is supported up a depth of five levels. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncMapNestedGrouping: false 6.3.45.5. posixWinsyncMsSFUSchema This attribute sets whether to the older Microsoft System Services for Unix 3.0 (msSFU30) schema when syncing Posix attributes from Active Directory. By default, the Posix Winsync API Plug-in uses Posix schema for modern Active Directory servers: 2005, 2008, and later versions. There are slight differences between the modern Active Directory Posix schema and the Posix schema used by Windows Server 2003 and older Windows servers. If an Active Directory domain is using the older-style schema, then the older-style schema can be used instead. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncMsSFUSchema: true 6.3.46. Postal Address String Syntax plug-in Plug-in Parameter Description Plug-in ID postaladdress-syntax DN of Configuration Entry cn=Postal Address Syntax,cn=plugins,cn=config Description Supports postal address syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.47. 
Printable String Syntax plug-in Plug-in Parameter Description Plug-in ID printablestring-syntax DN of Configuration Entry cn=Printable String Syntax,cn=plugins,cn=config Description Supports syntaxes and matching rules for alphanumeric and select punctuation strings (for strings which conform to printable strings as defined in RFC 4517 ). Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.48. Referential Integrity plug-in Plug-in Parameter Description Plug-in ID referint DN of Configuration Entry cn=Referential Integrity Postoperation,cn=plugins,cn=config Description Enables the server to ensure referential integrity Type postoperation Configurable Options All configuration and on | off Default Setting off Configurable Arguments When enabled, the post-operation Referential Integrity plug-in performs integrity updates on the member , uniquemember , owner , and seeAlso attributes immediately after a delete or rename operation. The plug-in can be configured to perform integrity checks on all other attributes. Dependencies Database Performance-Related Information The Referential Integrity plug-in should be enabled on all suppliers in multi-supplier replication environment. When enabling the plug-in on chained servers, be sure to analyze the performance resource and time needs as well as integrity needs; integrity checks can be time consuming and demanding on memory and CPU. All attributes specified must be indexed for both presence and equality. 6.3.49. Retro Changelog plug-in Two different types of changelogs are maintained by Directory Server. The first type, referred to as simply a changelog , is used by multi-supplier replication, and the second changelog, a plug-in referred to as the retro changelog , is intended for use by LDAP clients for maintaining application compatibility with Directory Server 4.x versions. This Retro Changelog Plug-in is used to record modifications made to a supplier server. When the supplier server's directory is modified, an entry is written to the Retro Changelog that contains both of the following: A number that uniquely identifies the modification. This number is sequential with respect to other entries in the changelog. The modification action; that is, exactly how the directory was modified. It is through the Retro Changelog Plug-in that the changes performed to Directory Server are accessed using searches to cn=changelog suffix. Plug-in Parameter Description Plug-in ID retrocl DN of Configuration Entry cn=Retro Changelog Plugin,cn=plugins,cn=config Description Used by LDAP clients for maintaining application compatibility with Directory Server 4.x versions. Maintains a log of all changes occurring in Directory Server. The retro changelog offers the same functionality as the changelog in the 4.x versions of Directory Server. This plug-in exposes the cn=changelog suffix to clients, so that clients can use this suffix with or without persistent search for simple sync applications. Type object Configurable Options on | off Default Setting off Configurable Arguments See Section 6.3.49, "Retro Changelog plug-in" for further information on the configuration attributes for this plug-in. Dependencies * Type: Database * Named: Class of Service Performance-Related Information May slow down Directory Server update performance. 6.3.49.1. 
isReplicated This optional attribute sets a flag on a change in the changelog to indicate whether the change was originally made on that server or whether it was replicated over from another server. Parameter Description OID 2.16.840.1.113730.3.1.2085 Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values true | false Default Value None Syntax Boolean Example isReplicated: true 6.3.49.2. nsslapd-attribute This attribute explicitly specifies another Directory Server attribute which must be included in the retro changelog entries. Many operational attributes and other types of attributes are commonly excluded from the retro changelog, but these attributes may need to be present for a third-party application to use the changelog data. This is done by listing the attribute in the retro changelog plug-in configuration using the nsslapd-attribute parameter. It is also possible to specify an optional alias for the specified attribute within the nsslapd-attribute value. nsslapd-attribute: attribute : alias Using an alias for the attribute can help avoid conflicts with other attributes in an external server or application which may use the retro changelog records. Note Setting the value of the nsslapd-attribute attribute to isReplicated is a way of indicating, in the retro changelog entry itself, whether the modification was done on the local server (that is, whether the change is an original change) or whether the change was replicated over to the server. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid directory attribute (standard or custom) Default Value None Syntax DirectoryString Example nsslapd-attribute: nsUniqueId: uniqueID 6.3.49.3. nsslapd-changelogdir This attribute specifies the name of the directory in which the changelog database is created the first time the plug-in is run. By default, the database is stored with all the other databases under /var/lib/dirsrv/slapd- instance /changelogdb . Note For performance reasons, store this database on a different physical disk. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid path to the directory Default Value None Syntax DirectoryString Example nsslapd-changelogdir: /var/lib/dirsrv/slapd- instance /changelogdb 6.3.49.4. nsslapd-changelogmaxage The nsslapd-changelogmaxage attribute sets the maximum age of any entry in the changelog. The changelog contains records of each directory modification and is used when synchronizing consumer servers. Each record contains a timestamp. Any record with a timestamp that is older than the value specified in this attribute is removed. By default, Directory Server removes records that are older than seven days. If you set this attribute to 0 , there is no age limit on changelog records, and Directory Server keeps all records. The size of the retro changelog is automatically reduced when you set a lower value. Note Expired changelog records will not be removed if there is an agreement that has fallen behind further than the maximum age.
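For example, a minimal ldapmodify sketch that trims retro changelog records older than 30 days could look like the following; the host name is a placeholder:
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-changelogmaxage
nsslapd-changelogmaxage: 30d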
Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Range 0 (meaning that entries are not removed according to their age) to the maximum 32 bit integer value (2147483647) Default Value 7d Syntax DirectoryString IntegerAgeID , where AgeID is: s ( S ) for seconds m ( M ) for minutes h ( H ) for hours d ( D ) for days w ( W ) for weeks If you set only the integer value without the AgeID then Directory Server takes it as seconds. Example nsslapd-changelogmaxage: 30d 6.3.49.5. nsslapd-exclude-attrs The nsslapd-exclude-attrs parameter stores an attribute name to exclude from the retro changelog database. To exclude multiple attributes, add one nsslapd-exclude-attrs parameter for each attribute to exclude. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example nsslapd-exclude-attrs: example 6.3.49.6. nsslapd-exclude-suffix The nsslapd-exclude-suffix parameter stores a suffix to exclude from the retro changelog database. You can add the parameter multiple times to exclude multiple suffixes. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example nsslapd-exclude-suffix: ou=demo,dc=example,dc=com 6.3.50. Roles plug-in Plug-in Parameter Description Plug-in ID roles DN of Configuration Entry cn=Roles Plugin,cn=plugins,cn=config Description Enables the use of roles in Directory Server Type object Configurable Options on | off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in * Named: Views Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.3.51. RootDN Access Control plug-in The root DN, cn=Directory Manager, is a special user entry that is defined outside the normal user database. Normal access control rules are not applied to the root DN, but because of the powerful nature of the root user, it can be beneficial to apply some kind of access control rules to the root user. The RootDN Access Control Plug-in sets normal access controls - host and IP address restrictions, time-of-day restrictions, and day of week restrictions - on the root user. This plug-in is disabled by default. Plug-in Parameter Description Plug-in ID rootdn-access-control DN of Configuration Entry cn=RootDN Access Control,cn=plugins,cn=config Description Enables and configures access controls to use for the root DN entry. Type internalpreoperation Configurable Options on | off Default Setting off Configurable Attributes * rootdn-open-time and rootdn-close-time for time-based access controls * rootdn-days-allowed for day-based access controls * rootdn-allow-host, rootdn-deny-host, rootdn-allow-ip, and rootdn-deny-ip for host-based access controls Dependencies None 6.3.51.1. rootdn-allow-host This sets what hosts, by fully-qualified domain name, the root user is allowed to use to access Directory Server. Any hosts not listed are implicitly denied. Wild cards are allowed. This attribute can be used multiple times to specify multiple hosts, domains, or subdomains. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid host name or domain, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-allow-host: *.example.com 6.3.51.2. 
rootdn-allow-ip This sets what IP addresses, either IPv4 or IPv6, for machines the root user is allowed to use to access Directory Server. Any IP addresses not listed are implicitly denied. Wild cards are allowed. This attribute can be used multiple times to specify multiple addresses, domains, or subnets. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid IPv4 or IPv6 address, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-allow-ip: 192.168.*.* 6.3.51.3. rootdn-close-time This sets part of a time period or range when the root user is allowed to access Directory Server. This sets when the time-based access ends, that is, when the root user is no longer allowed to access Directory Server. This is used in conjunction with the rootdn-open-time attribute. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid time, in a 24-hour format Default Value None Syntax Integer Example rootdn-close-time: 1700 6.3.51.4. rootdn-days-allowed This gives a comma-separated list of what days the root user is allowed to use to access Directory Server. Any days not listed are implicitly denied. This can be used with rootdn-close-time and rootdn-open-time to combine time-based access and days-of-week, or it can be used by itself (with all hours allowed on allowed days). Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Values * Sun * Mon * Tue * Wed * Thu * Fri * Sat Default Value None Syntax DirectoryString Example rootdn-days-allowed: Mon, Tue, Wed, Thu, Fri 6.3.51.5. rootdn-deny-ip This sets what IP addresses, either IPv4 or IPv6, for machines the root user is not allowed to use to access Directory Server. Any IP addresses not listed are implicitly allowed. Note Deny rules supersede allow rules, so if an IP address is listed in both the rootdn-allow-ip and rootdn-deny-ip attributes, it is denied access. Wild cards are allowed. This attribute can be used multiple times to specify multiple addresses, domains, or subnets. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid IPv4 or IPv6 address, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-deny-ip: 192.168.0.0 6.3.51.6. rootdn-open-time This sets part of a time period or range when the root user is allowed to access Directory Server. This sets when the time-based access begins. This is used in conjunction with the rootdn-close-time attribute. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid time, in a 24-hour format Default Value None Syntax Integer Example rootdn-open-time: 0800 6.3.52. Schema Reload plug-in Plug-in Information Description Plug-in ID schemareload Configuration Entry DN cn=Schema Reload,cn=plugins,cn=config Description Task plug-in to reload schema files Type object Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information 6.3.53. Space Insensitive String Syntax plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Space Insensitive String Syntax,cn=plugins,cn=config Description Syntax for handling space-insensitive values Type syntax Configurable Options on | off Default Setting off Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in.
Red Hat recommends leaving this plug-in running at all times. Further Information This plug-in enables the Directory Server to support space and case insensitive values. This allows applications to search the directory using entries with ASCII space characters. For example, a search or compare operation that uses jOHN Doe will match entries that contain johndoe , john doe , and John Doe if the attribute's schema has been configured to use the space insensitive syntax. 6.3.54. State Change plug-in Plug-in Parameter Description Plug-in ID statechange DN of Configuration Entry cn=State Change Plugin,cn=plugins,cn=config Description Enables state-change-notification service Type postoperation Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information 6.3.55. Syntax Validation Task plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Syntax Validation Task,cn=plugins,cn=config Description Enables syntax validation for attribute values Type object Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information This plug-in implements syntax validation tasks. The actual process that carries out syntax validation is performed by each specific syntax plug-in. 6.3.56. Telephone Syntax plug-in Plug-in Parameter Description Plug-in ID tele-syntax DN of Configuration Entry cn=Telephone Syntax,cn=plugins,cn=config Description Supports telephone number syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.57. Teletex Terminal Identifier Syntax plug-in Plug-in Parameter Description Plug-in ID teletextermid-syntax DN of Configuration Entry cn=Teletex Terminal Identifier Syntax,cn=plugins,cn=config Description Supports international telephone number syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.58. Telex Number Syntax plug-in Plug-in Parameter Description Plug-in ID telex-syntax DN of Configuration Entry cn=Telex Number Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for the telex number, country code, and answerback code of a telex terminal; from RFC 4517 . Type syntax Configurable Options on | off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.59. URI Syntax plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=URI Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for unique resource identifiers (URIs), including unique resource locators (URLs); from RFC 4517 . Type syntax Configurable Options on | off Default Setting off Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. 
If enabled, Red Hat recommends leaving this plug-in running at all times. Further Information RFC 4517 6.3.60. USN plug-in Plug-in Parameter Description Plug-in ID USN DN of Configuration Entry cn=USN,cn=plugins,cn=config Description Sets an update sequence number (USN) on an entry, for every entry in the directory, whenever there is a modification, including adding and deleting entries and modifying attribute values. Type object Configurable Options on | off Default Setting off Configurable Arguments None Dependencies Database Performance-Related Information For replication, it is recommended that the entryUSN configuration attribute be excluded using fractional replication. 6.3.61. Views plug-in Plug-in Parameter Description Plug-in ID views DN of Configuration Entry cn=Views,cn=plugins,cn=config Description Enables the use of views in Directory Server databases. Type object Configurable Options on | off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. 6.4. Database plug-in attributes The database plug-in is also organized in an information tree. All plug-in technology used by the database instances is stored in the cn=ldbm database plug-in node. This section presents the additional attribute information for each of the nodes in bold in the cn=ldbm database,cn=plugins,cn=config information tree. 6.4.1. Database attributes under cn=config,cn=ldbm database,cn=plugins,cn=config This section covers global configuration attributes common to all instances are stored in the cn=config,cn=ldbm database,cn=plugins,cn=config tree node. 6.4.1.1. nsslapd-backend-implement The nsslapd-backend-implement parameter defines the database back end that Directory Server uses. Directory Server supports the following database types: Berkeley Database (BDB) Lightning Memory-Mapped Database Manager (LMDB) Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values bdb or mdb Default Value bdb Syntax Directory String Example nsslapd-backend-implement: mdb 6.4.1.2. nsslapd-backend-opt-level This parameter can trigger experimental code to improve write performance. Possible values: 0 : Disables the parameter. 1 : The replication update vector is not written to the database during the transaction 2 : Changes the order of taking the back end lock and starts the transaction 4 : Moves code out of the transaction. All parameters can be combined. For example 7 enables all optimisation features. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by the Red Hat support. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 | 1 | 2 | 4 Default Value 0 Syntax Integer Example nsslapd-backend-opt-level: 0 6.4.1.3. nsslapd-db-deadlock-policy The nsslapd-db-deadlock-policy parameter sets the libdb library-internal deadlock policy. Important Only change this parameter if instructed by Red Hat Support. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0-9 Default Value 0 Syntax DirectoryString Example nsslapd-db-deadlock-policy: 9 6.4.1.4. nsslapd-db-private-import-mem The nsslapd-db-private-import-mem parameter manages whether or not Directory Server uses private memory for allocation of regions and mutexes for a database import. 
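As a hedged example, any of these global database attributes can be queried or changed on the configuration entry shown in the tables; the following sketch explicitly sets nsslapd-db-private-import-mem (the host name is a placeholder):
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-private-import-mem
nsslapd-db-private-import-mem: on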
Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-private-import-mem: on 6.4.1.5. nsslapd-db-transaction-wait If you enable the nsslapd-db-transaction-wait parameter, Directory Server does not start the transaction and waits until lock resources are available. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-transaction-wait: off 6.4.1.6. nsslapd-directory This attribute specifies the absolute path to the database instance. If the database instance is manually created, this attribute must be included. Once the database instance is created, do not modify this path, as any changes risk preventing the server from accessing data. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid absolute path to the database instance Default Value Syntax DirectoryString Example nsslapd-directory: /var/lib/dirsrv/slapd- instance /db 6.4.1.7. nsslapd-exclude-from-export This attribute contains a space-separated list of names of attributes to exclude from an entry when a database is exported. This is mainly used for some configuration and operational attributes which are specific to a server instance. Do not remove any of the default values for this attribute, since that may affect server performance. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid attribute Default Value entrydn entryid dncomp parentid numSubordinates entryusn Syntax DirectoryString Example nsslapd-exclude-from-export: entrydn entryid dncomp parentid numSubordinates entryusn 6.4.1.8. nsslapd-idlistscanlimit The nsslapd-idlistscanlimit attribute is deprecated because the impact of the attribute on search performance is more harmful than helpful. Further description is provided for historical purposes only. This performance-related attribute, present by default, specifies the number of entry IDs that are searched during a search operation. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message, with additional error information explaining the problem. It is advisable to keep the default value to improve search performance. This parameter can be changed while the server is running, and the new value will affect subsequent searches. The corresponding user-level attribute is nsIDListScanLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 100 to the maximum 32-bit integer value (2147483647) entry IDs Default Value 2147483646 Syntax Integer Example nsslapd-idlistscanlimit: 50000 6.4.1.9. nsslapd-idl-switch The nsslapd-idl-switch parameter sets the IDL format Directory Server uses. Note that Red Hat no longer supports the old IDL format. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values new | old Default Value new Syntax Directory String Example nsslapd-idl-switch: new 6.4.1.10. nsslapd-lookthroughlimit This performance-related attribute specifies the maximum number of entries that Directory Server will check when examining candidate entries in response to a search request. The Directory Manager DN, however, is unlimited by default and overrides any other settings specified here.
It is worth noting that binder-based resource limits work for this limit, which means that if a value for the operational attribute nsLookThroughLimit is present in the entry as which a user binds, the default limit will be overridden. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 5000 Syntax Integer Example nsslapd-lookthroughlimit: 5000 6.4.1.11. nsslapd-mode This attribute specifies the permissions used for newly created index files. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any four-digit octal number. However, mode 0600 is recommended. This allows read and write access for the owner of the index files (which is the user as whom the ns-slapd runs) and no access for other users. Default Value 600 Syntax Integer Example nsslapd-mode: 0600 6.4.1.12. nsslapd-pagedidlistscanlimit This performance-related attribute specifies the number of entry IDs that are searched, specifically, for a search operation using the simple paged results control. This attribute works the same as the nsslapd-idlistscanlimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsslapd-idlistscanlimit is used for paged searches as well as non-paged searches. The corresponding user-level attribute is nsPagedIDListScanLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 0 Syntax Integer Example nsslapd-pagedidlistscanlimit: 5000 6.4.1.13. nsslapd-pagedlookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries for a search which uses the simple paged results control. This attribute works the same as the nsslapd-lookthroughlimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsslapd-lookthroughlimit is used for paged searches as well as non-paged searches. The corresponding user-level attribute is nsPagedLookThroughLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 0 Syntax Integer Example nsslapd-pagedlookthroughlimit: 25000 6.4.1.14. nsslapd-rangelookthroughlimit This performance-related attribute specifies the maximum number of entries that Directory Server will check when examining candidate entries in response to a range search request. Range searches use operators to set a bracket to search for and return an entire subset of entries within the directory. For example, this searches for every entry modified at or after midnight on January 1, 2020: (modifyTimestamp>=20200101000000Z) The nature of a range search is that it must evaluate every single entry within the directory to see if it is within the range given. Essentially, a range search is always an all IDs search. For most users, the look-through limit kicks in and prevents range searches from turning into an all IDs search.
This improves overall performance and speeds up range search results. However, some clients or administrative users like Directory Manager may not have a look-through limit set. In that case, a range search can take several minutes to complete or even continue indefinitely. The nsslapd-rangelookthroughlimit attribute sets a separate range look-through limit that applies to all users, including Directory Manager. This allows clients and administrative users to have high look-through limits while still allowing a reasonable limit to be set on potentially performance-impaired range searches. Note Unlike other resource limits, this applies to searches by any user, including Directory Manager, regular users, and other LDAP clients. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 5000 Syntax Integer Example nsslapd-rangelookthroughlimit: 5000 6.4.1.15. nsslapd-search-bypass-filter-test If you enable the nsslapd-search-bypass-filter-test parameter, Directory Server bypasses filter checks when it builds candidate lists during a search. If you set the parameter to verify , Directory Server evaluates the filter against the search candidate entries. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off | verify Default Value on Syntax Directory String Example nsslapd-search-bypass-filter-test: on 6.4.1.16. nsslapd-search-use-vlv-index The nsslapd-search-use-vlv-index parameter enables and disables virtual list view (VLV) searches. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax Directory String Example nsslapd-search-use-vlv-index: on 6.4.2. Database attributes under cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config This section covers global configuration attributes that are common to all instances and are stored in the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config tree node. 6.4.2.1. nsslapd-cache-autosize This performance tuning-related attribute sets the percentage of free memory that is used in total for the database and entry cache. For example, if the value is set to 10 , 10% of the system's free RAM is used for both caches. If this value is set to a value greater than 0 , auto-sizing is enabled for the database and entry cache. For optimized performance, Red Hat recommends not disabling auto-sizing. However, in certain situations it can be necessary to disable auto-sizing. In this case, set the nsslapd-cache-autosize attribute to 0 and manually set: the database cache in the nsslapd-dbcachesize attribute. the entry cache in the nsslapd-cachememsize attribute. Note If the nsslapd-cache-autosize and nsslapd-cache-autosize-split attributes are both set to high values, such as 100 , Directory Server fails to start. To fix the problem, set both parameters to more reasonable values. For example: nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40 Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 100. If 0 is set, the default value is used instead. Default Value 10 Syntax Integer Example nsslapd-cache-autosize: 10 6.4.2.2. nsslapd-cache-autosize-split This performance tuning-related attribute sets the percentage of RAM that is used for the database cache. The remaining percentage is used for the entry cache.
For example, if the value is set to 40 , the database cache uses 40%, and the entry cache the remaining 60% of the free RAM reserved in the nsslapd-cache-autosize attribute. Note If the nsslapd-cache-autosize and nsslapd-cache-autosize-split attribute are both set to high values, such as 100 , Directory Server fails to start. To fix the problem, set both parameters to more reasonable values. For example: nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40 Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 99. If 0 is set, the default value is used instead. Default Value 40 Syntax Integer Example nsslapd-cache-autosize-split: 40 6.4.2.3. nsslapd-dbcachesize This performance tuning-related attribute specifies the database index cache size, in bytes. This is one of the most important values for controlling how much physical RAM the directory server uses. This is not the entry cache. This is the amount of memory the Berkeley database back end will use to cache the indexes (the .db files) and other files. This value is passed to the Berkeley DB API function set_cachesize . If automatic cache resizing is activated, this attribute is overridden when the server replaces these values with its own guessed values at a later stage of the server startup. For more technical information on this attribute, see the cache size section of the Berkeley DB reference guide at link:https://docs.oracle.com/cd/E17076_04/html/programmer_reference/general_am_conf.html#am_conf_cachesize. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Note Do not set the database cache size manually. Red Hat recommends to use the database cache auto-sizing feature for optimized performance. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 4 gigabytes for 32-bit platforms and 500 kilobytes to 2^64-1 for 64-bit platforms Default Value Syntax Integer Example nsslapd-dbcachesize: 10000000 6.4.2.4. nsslapd-db-checkpoint-interval This sets the amount of time in seconds after which Directory Server sends a checkpoint entry to the database transaction log. The database transaction log contains a sequential listing of all recent database operations and is used for database recovery only. A checkpoint entry indicates which database operations have been physically written to the directory database. The checkpoint entries are used to determine where in the database transaction log to begin recovery after a system failure. The nsslapd-db-checkpoint-interval attribute is absent from dse.ldif . To change the checkpoint interval, add the attribute to dse.ldif . This attribute can be dynamically modified using ldapmodify . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat Technical Support or Red Hat Consulting. Inconsistent settings of this attribute and other configuration attributes may cause Directory Server to be unstable. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 10 to 300 seconds Default Value 60 Syntax Integer Example nsslapd-db-checkpoint-interval: 120 6.4.2.5. nsslapd-db-circular-logging This attribute specifies circular logging for the transaction log files. 
If this attribute is switched off, old transaction log files are not removed and are kept renamed as old log transaction files. Turning circular logging off can severely degrade server performance and, as such, should only be modified with the guidance of Red Hat Technical Support or Red Hat Consulting. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-circular-logging: on 6.4.2.6. nsslapd-db-debug This attribute specifies whether additional error information is to be reported to Directory Server. To report error information, set the parameter to on . This parameter is meant for troubleshooting; enabling the parameter may slow down Directory Server. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-debug: off 6.4.2.7. nsslapd-db-durable-transactions This attribute sets whether database transaction log entries are immediately written to the disk. The database transaction log contains a sequential listing of all recent database operations and is used for database recovery only. With durable transactions enabled, every directory change will always be physically recorded in the log file and, therefore, able to be recovered in the event of a system failure. However, the durable transactions feature may also slow the performance of Directory Server. When durable transactions is disabled, all transactions are logically written to the database transaction log but may not be physically written to disk immediately. If there were a system failure before a directory change was physically written to disk, that change would not be recoverable. The nsslapd-db-durable-transactions attribute is absent from dse.ldif . To disable durable transactions, add the attribute to dse.ldif . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat Technical Support or Red Hat Consulting. Inconsistent settings of this attribute and other configuration attributes may cause Directory Server to be unstable. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-durable-transactions: on 6.4.2.8. nsslapd-db-compactdb-interval The nsslapd-db-compactdb-interval attribute defines the interval in seconds when Directory Server compacts the databases and replication changelogs. The compact operation returns the unused pages to the file system and the database file size shrinks. Note that compacting the database is resource-intensive and should not be done too often. The attribute change does not require the server restart. However, Directory Server starts to count the new interval value from the time you changed the value. For example, the compaction is planned for today at 10:40. Then at 10:35, 5 minutes before the planned compaction, you set the new interval ( nsslapd-db-compactdb-interval ) to 259200 seconds (3 days) and the new compaction time ( nsslapd-db-compactdb-time ) to 20:30 . Now Directory Server discards the compaction planned for today at 10:40 and performs the compaction in 3 days at 20:30. 
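For example, the scenario above could be configured with a single ldapmodify operation such as the following sketch (the host name is a placeholder):
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-compactdb-interval
nsslapd-db-compactdb-interval: 259200
-
replace: nsslapd-db-compactdb-time
nsslapd-db-compactdb-time: 20:30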
Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 (no compaction) to 2147483647 second Default Value 2592000 (30 days) Syntax Integer Example nsslapd-db-compactdb-interval: 2592000 6.4.2.9. nsslapd-db-compactdb-time The nsslapd-db-compactdb-time attribute sets the time of the day when Directory Server compacts all databases and their replication changelogs. The compaction task runs after the compaction interval ( nsslapd-db-compactdb-interval ) has been exceeded. The attribute change does not require the server restart. However, Directory Server applies the new time value when the compaction interval set in nsslapd-db-compactdb-interval expires. For example, the compaction is planned today at 10:40. Then at 10:35, 5 minutes before the planned compaction, you set the new interval ( nsslapd-db-compactdb-interval ) to 259200 seconds (3 days) and the new compaction time ( nsslapd-db-compactdb-time ) to 20:30 . Now Directory Server discards the compaction planned for today at 10:40 and performs the compaction in 3 days at 20:30. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values HH:MM. Time is set in 24-hour format Default Value 23:59 Syntax DirectoryString Example nsslapd-db-compactdb-time: 23:59 6.4.2.10. nsslapd-db-home-directory This parameter specifies the location of memory-mapped files of Directory Server databases. For performance reasons, the default value of this parameter refers to the /dev/shm/ directory, which uses a tmpfs file system.. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid directory Default Value /dev/shm/ Syntax DirectoryString Example nsslapd-db-home-directory: /dev/shm/ 6.4.2.11. nsslapd-db-idl-divisor This attribute specifies the index block size in terms of the number of blocks per database page. The block size is calculated by dividing the database page size by the value of this attribute. A value of 1 makes the block size exactly equal to the page size. The default value of 0 sets the block size to the page size minus an estimated allowance for internal database overhead. For the majority of installations, the default value should not be changed unless there are specific tuning needs. Before modifying the value of this attribute, export all databases using the db2ldif script. Once the modification has been made, reload the databases using the ldif2db script. Warning This parameter should only be used by very advanced users. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 8 Default Value 0 Syntax Integer Example nsslapd-db-idl-divisor: 2 6.4.2.12. nsslapd-db-locks Lock mechanisms in Directory Server control how many copies of Directory Server processes can run at the same time. The nsslapd-db-locks parameter sets the maximum number of locks. Only set this parameter to a higher value if Directory Server runs out of locks and logs libdb: Lock table is out of available locks error messages. If you set a higher value without a need, this increases the size of the /var/lib/dirsrv/slapd- instance_name /db__db.* files without any benefit. The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 Default Value 10000 Syntax Integer Example nsslapd-db-locks: 10000 6.4.2.13. 
nsslapd-db-locks-monitoring-enabled Running out of database locks can lead to data corruption. With the nsslapd-db-locks-monitoring-enabled parameter, you can enable or disable database lock monitoring. If the parameter is enabled, which is the default, Directory Server terminates all searches if the number of active database locks is higher than the percentage threshold configured in nsslapd-db-locks-monitoring-threshold . If an issue occurs, the administrator can increase the number of database locks in the nsslapd-db-locks parameter. Restart the service for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-locks-monitoring-enabled: on 6.4.2.14. nsslapd-db-locks-monitoring-pause If monitoring of database locks is enabled in the nsslapd-db-locks-monitoring-enable parameter, nsslapd-db-locks-monitoring-pause defines the interval in milliseconds that the monitoring thread sleeps between the checks. If you set this parameter to a too high value, the server can run out of database locks before the monitoring check happens. However, setting a too low value can slow down the server. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 - 2147483647 (value in milliseconds) Default Value 500 Syntax DirectoryString Example nsslapd-db-locks-monitoring-pause: 500 6.4.2.15. nsslapd-db-locks-monitoring-threshold If monitoring of database locks is enabled in the nsslapd-db-locks-monitoring-enable parameter, nsslapd-db-locks-monitoring-threshold sets the maximum percentage of used database locks before Directory Server terminates searches to avoid further lock exhaustion. Restart the service for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 70 - 95 Default Value 90 Syntax DirectoryString Example nsslapd-db-locks-monitoring-threshold: 90 6.4.2.16. nsslapd-db-logbuf-size This attribute specifies the log information buffer size. Log information is stored in memory until the buffer fills up or the transaction commit forces the buffer to be written to disk. Larger buffer sizes can significantly increase throughput in the presence of long running transactions, highly concurrent applications, or transactions producing large amounts of data. The log information buffer size is the transaction log size divided by four. The nsslapd-db-logbuf-size attribute is only valid if the nsslapd-db-durable-transactions attribute is set to on . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 32K to maximum 32-bit integer (limited to the amount of memory available on the machine) Default Value 32K Syntax Integer Example nsslapd-db-logbuf-size: 32K 6.4.2.17. nsslapd-db-logdirectory This attribute specifies the path to the directory that contains the database transaction log. The database transaction log contains a sequential listing of all recent database operations. Directory Server uses this information to recover the database after an instance shut down unexpectedly. By default, the database transaction log is stored in the same directory as the directory database. To update this parameter, you must manually update the /etc/dirsrv/slapd- instance_name /dse.ldif file. 
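For example, after stopping the instance, the relevant line in dse.ldif might be edited to point the transaction log at a dedicated disk; the path below is purely illustrative:
dn: cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config
...
nsslapd-db-logdirectory: /mnt/fast-disk/slapd-instance_name/txnlog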
Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid path Default Value Syntax DirectoryString Example nsslapd-db-logdirectory: /var/lib/dirsrv/slapd- instance_name /db/ 6.4.2.18. nsslapd-db-logfile-size This attribute specifies the maximum size of a single file in the log in bytes. By default, or if the value is set to 0 , a maximum size of 10 megabytes is used. The maximum size is an unsigned 4-byte value. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to unsigned 4-byte integer Default Value 10MB Syntax Integer Example nsslapd-db-logfile-size: 10 MB 6.4.2.19. nsslapd-dbncache This attribute can split the LDBM cache into equally sized separate pieces of memory. It is possible to specify caches that are large enough so that they cannot be allocated contiguously on some architectures; for example, some systems limit the amount of memory that may be allocated contiguously by a process. If nsslapd-dbncache is 0 or 1 , the cache will be allocated contiguously in memory. If it is greater than 1 , the cache will be broken up into ncache , equally sized separate pieces of memory. To configure a dbcache size larger than 4 gigabytes, add the nsslapd-dbncache attribute to cn=config,cn=ldbm database,cn=plugins,cn=config between the nsslapd-dbcachesize and nsslapd-db-logdirectory attribute lines. Set this value to an integer that is one-quarter (1/4) the amount of memory in gigabytes. For example, for a 12 gigabyte system, set the nsslapd-dbncache value to 3 ; for an 8 gigabyte system, set it to 2 . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat technical support or Red Hat professional services. Inconsistent settings of this attribute and other configuration attributes may cause Directory Server to be unstable. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 1 to 4 Default Value 1 Syntax Integer Example nsslapd-dbncache: 1 6.4.2.20. nsslapd-db-page-size This attribute specifies the size of the pages used to hold items in the database in bytes. The minimum size is 512 bytes, and the maximum size is 64 kilobytes. If the page size is not explicitly set, Directory Server defaults to a page size of 8 kilobytes. Changing this default value can have a significant performance impact. If the page size is too small, it results in extensive page splitting and copying, whereas if the page size is too large it can waste disk space. Before modifying the value of this attribute, export all databases using the db2ldif script. Once the modification has been made, reload the databases using the ldif2db script. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 512 bytes to 64 kilobytes Default Value 8KB Syntax Integer Example nsslapd-db-page-size: 8KB 6.4.2.21. nsslapd-db-spin-count This attribute specifies the number of times that test-and-set mutexes should spin without blocking. Warning Never touch this value unless you are very familiar with the inner workings of Berkeley DB or are specifically told to do so by Red Hat support. The default value of 0 causes BDB to calculate the actual value by multiplying the number of available CPU cores (as reported by the nproc utility or the sysconf(_SC_NPROCESSORS_ONLN) call) by 50 . 
For example, with a processor with 8 logical cores, leaving this attribute set to 0 is equivalent to setting it to 400 . It is not possible to turn spinning off entirely; to minimize the number of times test-and-set mutexes spin without blocking, set this attribute to 1 . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 2147483647 (2^31-1) Default Value 0 Syntax Integer Example nsslapd-db-spin-count: 0 6.4.2.22. nsslapd-db-transaction-batch-max-wait If nsslapd-db-transaction-batch-val is set, the flushing of transactions is done by a separate thread when the set batch value is reached. However, if there are only a few updates, this process might take too long. This parameter controls the latest time at which transactions are flushed, independently of the batch count. The value is defined in milliseconds. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by Red Hat support. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 (value in milliseconds) Default Value 50 Syntax Integer Example nsslapd-db-transaction-batch-max-wait: 50 6.4.2.23. nsslapd-db-transaction-batch-min-wait If nsslapd-db-transaction-batch-val is set, the flushing of transactions is done by a separate thread when the set batch value is reached. However, if there are only a few updates, this process might take too long. This parameter controls the earliest time at which transactions are flushed, independently of the batch count. The value is defined in milliseconds. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by Red Hat support. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 (value in milliseconds) Default Value 50 Syntax Integer Example nsslapd-db-transaction-batch-min-wait: 50 6.4.2.24. nsslapd-db-transaction-batch-val This attribute specifies how many transactions will be batched before being committed. This attribute can improve update performance when full transaction durability is not required. This attribute can be dynamically modified using ldapmodify . Warning Setting this value will reduce data consistency and may lead to loss of data. This is because if there is a power outage before the server can flush the batched transactions, those transactions in the batch will be lost. Do not set this value unless specifically requested to do so by Red Hat support. If this attribute is not defined or is set to a value of 0 , transaction batching will be turned off, and it will be impossible to make remote modifications to this attribute using LDAP. However, setting this attribute to a value greater than 0 causes the server to delay committing transactions until the number of queued transactions is equal to the attribute value. A value greater than 0 also allows modifications to this attribute remotely using LDAP. A value of 1 for this attribute allows modifications to the attribute setting remotely using LDAP, but results in no batching behavior. A value of 1 at server startup is therefore useful for maintaining normal durability while also allowing transaction batching to be turned on and off remotely when required. Remember that the value for this attribute may require modifying the nsslapd-db-logbuf-size attribute to ensure sufficient log buffer size for accommodating the batched transactions.
Note The nsslapd-db-transaction-batch-val attribute is only valid if the nsslapd-db-durable-transaction attribute is set to on . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 30 Default Value 0 (or turned off) Syntax Integer Example nsslapd-db-transaction-batch-val: 5 6.4.2.25. nsslapd-db-trickle-percentage This attribute sets that at least the specified percentage of pages in the shared-memory pool are clean by writing dirty pages to their backing files. This is to ensure that a page is always available for reading in new information without having to wait for a write. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 100 Default Value 40 Syntax Integer Example nsslapd-db-trickle-percentage: 40 6.4.2.26. nsslapd-db-verbose This attribute specifies whether to record additional informational and debugging messages when searching the log for checkpoints, doing deadlock detection, and performing recovery. This parameter is meant for troubleshooting, and enabling the parameter may slow down Directory Server. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-verbose: off 6.4.2.27. nsslapd-import-cache-autosize This performance tuning-related attribute automatically sets the size of the import cache ( importCache ) to be used during the command-line-based import process of LDIF files to the database (the ldif2db operation). In Directory Server, the import operation can be run as a server task or exclusively on the command-line. In the task mode, the import operation runs as a general Directory Server operation. The nsslapd-import-cache-autosize attribute enables the import cache to be set automatically to a predetermined size when the import operation is run on the command-line. The attribute can also be used by Directory Server during the task mode import for allocating a specified percentage of free memory for import cache. By default, the nsslapd-import-cache-autosize attribute is enabled and is set to a value of -1 . This value autosizes the import cache for the ldif2db operation only, automatically allocating fifty percent (50%) of the free physical memory for the import cache. The percentage value (50%) is hard-coded and cannot be changed. Setting the attribute value to 50 ( nsslapd-import-cache-autosize: 50 ) has the same effect on performance during an ldif2db operation. However, such a setting will have the same effect on performance when the import operation is run as a Directory Server task. The -1 value autosizes the import cache just for the ldif2db operation and not for any, including import, general Directory Server tasks. Note The purpose of a -1 setting is to enable the ldif2db operation to benefit from free physical memory but, at the same time, not compete for valuable memory with the entry cache, which is used for general operations of Directory Server. Setting the nsslapd-import-cache-autosize attribute value to 0 turns off the import cache autosizing feature - that is, no autosizing occurs during either mode of the import operation. Instead, Directory Server uses the nsslapd-import-cachesize attribute for import cache size, with a default value of 20000000 . There are three caches in the context of Directory Server: database cache, entry cache, and import cache. The import cache is only used during the import operation. 
The nsslapd-cache-autosize attribute, which is used for autosizing the entry cache and database cache, is used during the Directory Server operations only and not during the ldif2db command-line operation; the attribute value is the percentage of free physical memory to be allocated for the entry cache and database cache. If both the autosizing attributes, nsslapd-cache-autosize and nsslapd-import-cache-autosize , are enabled, ensure that their sum is less than 100. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1, 0 (turns import cache autosizing off) to 100 Default Value -1 (turns import cache autosizing on for ldif2db only and allocates 50% of the free physical memory to import cache) Syntax Integer Example nsslapd-import-cache-autosize: -1 6.4.2.28. nsslapd-search-bypass-filter-test If you enable the nsslapd-search-bypass-filter-test parameter, Directory Server bypasses filter checks when it builds candidate lists during a search. If you set the parameter to verify , Directory Server evaluates the filter against the search candidate entries. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off | verify Default Value on Syntax Directory String Example nsslapd-search-bypass-filter-test: on 6.4.3. Database attributes under cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config This section covers global Lightning Memory-Mapped Database Manager (LMDB) configuration attributes that are stored in the cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config tree node and are common to all instances. 6.4.3.1. nsslapd-mdb-max-dbs The nsslapd-mdb-max-dbs attribute sets the maximum number of named database instances that can be included within the memory mapped database file. If the attribute value is set to zero ( 0 ), Directory Server computes this attribute value. Each suffix and its default indexes consume 35 named databases. Each additional index consumes one named database. With the default value of 512, you can create up to 14 suffixes. To apply changes to the attribute value, you must restart the server. Parameter Description Entry DN cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0-2147483647 Default Value 512 Syntax Integer Example nsslapd-mdb-max-dbs: 512 6.4.3.2. nsslapd-mdb-max-readers The nsslapd-mdb-max-readers attribute sets the maximum number of read operations that can be opened simultaneously. If the attribute value is set to zero ( 0 ), Directory Server computes this attribute value. To apply changes to the attribute value, you must restart the server. Parameter Description Entry DN cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0-2147483647 Default Value 0 Syntax Integer Example nsslapd-mdb-max-readers:0 6.4.3.3. nsslapd-mdb-max-size The nsslapd-mdb-max-size attribute sets the database maximum size in bytes. The maximum size of the Lightning Memory-Mapped Database Manager (LMDB) database is limited by the system addressable memory. Important Make sure that the value of nsslapd-mdb-max-size is high enough to store all intended data. However, the value must not be so high that it impacts performance, because the database file is memory-mapped. You can use the database size in the Directory Server Hardware requirements as a reference. To apply changes to the attribute value, you must restart the server.
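For example, a sketch that raises the maximum map size to 40 GB (the value is illustrative only; the host and instance names are placeholders) and then restarts the server might look like this:
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-mdb-max-size
nsslapd-mdb-max-size: 42949672960

# dsctl instance_name restart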
Parameter Description Entry DN cn=mdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 512 to maximum 32-bit integer (limited to the amount of memory available on the machine) Default Value 21474836480 Syntax Integer Example nsslapd-mdb-max-size:21474836480 6.4.4. Database attributes under cn=monitor,cn=ldbm database,cn=plugins,cn=config Global read-only attributes containing database statistics for monitoring activity on the databases are stored in the cn=monitor,cn=ldbm database,cn=plugins,cn=config tree node. 6.4.4.1. currentNormalizedDNcachecount Number of normalized cached DNs. 6.4.4.2. currentNormalizedDNcachesize Current size of the normalized DN cache in bytes. 6.4.4.3. dbcachehitratio This attribute shows the percentage of requested pages found in the database cache (hits/tries). 6.4.4.4. dbcachehits This attribute shows the requested pages found in the database. 6.4.4.5. dbcachepagein This attribute shows the pages read into the database cache. 6.4.4.6. dbcachepageout This attribute shows the pages written from the database cache to the backing file. 6.4.4.7. dbcacheroevict This attribute shows the clean pages forced from the cache. 6.4.4.8. dbcacherwevict This attribute shows the dirty pages forced from the cache. 6.4.4.9. dbcachetries This attribute shows the total cache lookups. 6.4.4.10. maxNormalizedDNcachesize Current value of the nsslapd-ndn-cache-max-size parameter. For details how to update this setting, see Section 2.1.135, "nsslapd-ndn-cache-max-size" . 6.4.4.11. normalizedDNcachehitratio Percentage of the normalized DNs found in the cache. 6.4.4.12. normalizedDNcachehits Normalized DNs found within the cache. 6.4.4.13. normalizedDNcachemisses Normalized DNs not found within the cache. 6.4.4.14. normalizedDNcachetries Total number of cache lookups since the instance was started. 6.4.5. Database attributes under cn= database_name ,cn=ldbm database,cn=plugins,cn=config The cn= database_name subtree contains all the configuration data for the user-defined database. The cn=userRoot subtree is called userRoot by default. However, this is not hard-coded and, given the fact that there are going to be multiple database instances, this name is changed and defined by the user as and when new databases are added. The cn=userRoot database referenced can be any user database. The following attributes are common to databases, such as cn=userRoot . 6.4.5.1. nsslapd-cachememsize This performance tuning-related attribute specifies the size, in bytes, for the available memory space for the entry cache. The simplest method is limiting cache size in terms of memory occupied. Activating automatic cache resizing overrides this attribute, replacing these values with its own guessed values at a later stage of the server startup. Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Note Do not set the database cache size manually. Red Hat recommends to use the entry cache auto-sizing feature for optimized performance. 
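To check the current entry cache setting for a backend without changing it, a read-only ldapsearch sketch such as the following can be used; the backend name userRoot, the host name, and the bind options are assumptions:
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -x -b "cn=userRoot,cn=ldbm database,cn=plugins,cn=config" -s base "(objectclass=*)" nsslapd-cachememsize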
Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 2 64 -1 on 64-bit systems Default Value 209715200 (200 MiB) Syntax Integer Example nsslapd-cachememsize: 209715200 6.4.5.2. nsslapd-cachesize This attribute has been deprecated. To resize the entry cache, use nsslapd-cachememsize. This performance tuning-related attribute specifies the cache size in terms of the number of entries it can hold. However, this attribute is deprecated in favor of the nsslapd-cachememsize attribute, which sets an absolute allocation of RAM for the entry cache size, as described in Section 6.4.5.1, "nsslapd-cachememsize" Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. The server has to be restarted for changes to this attribute to go into effect. Note The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 1 to 2 32 -1 on 32-bit systems or 2 63 -1 on 64-bit systems or -1, which means limitless Default Value -1 Syntax Integer Example nsslapd-cachesize: -1 6.4.5.3. nsslapd-directory This attribute specifies the path to the database instance. If it is a relative path, it starts from the path specified by nsslapd-directory in the global database entry cn=config,cn=ldbm database,cn=plugins,cn=config . The database instance directory is named after the instance name and located in the global database directory, by default. After the database instance has been created, do not modify this path, because any changes risk preventing the server from accessing data. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid path to the database instance Default Value Syntax DirectoryString Example nsslapd-directory: /var/lib/dirsrv/slapd- instance /db/userRoot 6.4.5.4. nsslapd-dncachememsize This performance tuning-related attribute specifies the size, in bytes, for the available memory space for the DN cache. The DN cache is similar to the entry cache for a database, only its table stores only the entry ID and the entry DN. This allows faster lookups for rename and moddn operations. The simplest method is limiting cache size in terms of memory occupied. Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Note The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 2 32 -1 on 32-bit systems and to 2 64 -1 on 64-bit systems Default Value 10,485,760 (10 megabytes) Syntax Integer Example nsslapd-dncachememsize: 10485760 6.4.5.5. nsslapd-readonly This attribute specifies read-only mode for a single back-end instance. 
If this attribute has a value of off , then users have all read, write, and execute permissions allowed by their access permissions. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-readonly: off 6.4.5.6. nsslapd-require-index When set to on , this attribute causes the server to refuse unindexed searches. This performance-related attribute avoids saturating the server with erroneous searches. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-index: off 6.4.5.7. nsslapd-require-internalop-index When a plug-in modifies data, it has a write lock on the database. On large databases, if a plug-in then executes an unindexed search, the plug-in can use all database locks and corrupt the database, or the server can become unresponsive. To avoid this problem, you can reject internal unindexed searches by enabling the nsslapd-require-internalop-index parameter. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-internalop-index: off 6.4.5.8. nsslapd-suffix This attribute specifies the suffix of the database. This is a single-valued attribute because each database instance can have only one suffix. Previously, it was possible to have more than one suffix on a single database instance, but this is no longer the case. As a result, this attribute is single-valued to enforce the fact that each database instance can only have one suffix entry. Any changes made to this attribute after the entry has been created take effect only after the server containing the database is restarted. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example nsslapd-suffix: o=Example 6.4.5.9. vlvBase This attribute sets the base DN for which the browsing or virtual list view (VLV) index is created. Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example vlvBase: ou=People,dc=example,dc=com 6.4.5.10. vlvEnabled The vlvEnabled attribute provides status information about a specific VLV index, and Directory Server sets this attribute at run time. Although vlvEnabled is shown in the configuration, you cannot modify this attribute. Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values 0 (disabled) | 1 (enabled) Default Value 1 Syntax DirectoryString Example vlvEnabled: 0 6.4.5.11. vlvFilter The browsing or virtual list view (VLV) index is created by running a search according to a filter and including entries which match that filter in the index. The filter is specified in the vlvFilter attribute. Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid LDAP filter Default Value Syntax DirectoryString Example vlvFilter: (|(objectclass=*)(objectclass=ldapsubentry)) 6.4.5.12. vlvIndex A browsing index or virtual list view (VLV) index dynamically generates an abbreviated index of entry headers that makes it much faster to visually browse large indexes.
A VLV index definition has two parts: one which defines the index and one which defines the search used to identify entries to add to the index. The vlvIndex object class defines the index entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.42 Table 6.2. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. vlvSort Identifies the attribute list that the browsing index (virtual list view index) is sorted on. Table 6.3. Allowed Attributes Attribute Definition vlvEnabled Stores the availability of the browsing index. vlvUses Contains the count the browsing index is used. 6.4.5.13. vlvScope This attribute sets the scope of the search to run for entries in the browsing or virtual list view (VLV) index. Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values * 1 (one-level or children search) * 2 (subtree search) Default Value Syntax Integer Example vlvScope: 2 6.4.5.14. vlvSearch A browsing index or virtual list view (VLV) index dynamically generates an abbreviated index of entry headers that makes it much faster to visually browse large indexes. A VLV index definition has two parts: one which defines the index and one which defines the search used to identify entries to add to the index. The vlvSearch object class defines the search filter entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.38 Table 6.4. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. vlvBase Identifies base DN the browsing index is created. vlvScope Identifies the scope to define the browsing index. vlvFilter Identifies the filter string to define the browsing index. Table 6.5. Allowed Attributes Attribute Definition multiLineDescription Gives a text description of the entry. 6.4.5.15. vlvSort This attribute sets the sort order for returned entries in the browsing or virtual list view (VLV) index. Note The entry for this attribute is a vlvIndex entry beneath the vlvSearch entry. Parameter Description Entry DN cn= index_name ,cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any Directory Server attributes, in a space-separated list Default Value Syntax DirectoryString Example vlvSort: cn givenName o ou sn 6.4.5.16. vlvUses The vlvUses attribute contains the count the browsing index uses, and Directory Server sets this attribute at run time. Although vlvUses is shown in the configuration, you cannot modify this attribute. Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values N/A Default Value Syntax DirectoryString Example vlvUses: 800 6.4.6. Database attributes under cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config The attributes in this tree node entry are all read-only, database performance counters. All of the values for these attributes are 32-bit integers, except for entrycachehits and entrycachetries . If the nsslapd-counters attribute in cn=config is set to on , then some of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For the database monitoring, the entrycachehits and entrycachetries counters use 64-bit integers. Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. 
The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. 6.4.6.1. currentdncachecount This attribute shows the number of DNs currently present in the DN cache. 6.4.6.2. currentdncachesize This attribute shows the total size, in bytes, of DNs currently present in the DN cache. 6.4.6.3. maxdncachesize This attribute shows the maximum size, in bytes, of DNs that can be maintained in the database DN cache. 6.4.6.4. nsslapd-db-abort-rate This attribute shows the number of transactions that have been aborted. 6.4.6.5. nsslapd-db-active-txns This attribute shows the number of transactions that are currently active. 6.4.6.6. nsslapd-db-cache-hit This attribute shows the requested pages found in the cache. 6.4.6.7. nsslapd-db-cache-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. 6.4.6.8. nsslapd-db-cache-size-bytes This attribute shows the total cache size in bytes. 6.4.6.9. nsslapd-db-cache-try This attribute shows the total cache lookups. 6.4.6.10. nsslapd-db-clean-pages This attribute shows the clean pages currently in the cache. 6.4.6.11. nsslapd-db-commit-rate This attribute shows the number of transactions that have been committed. 6.4.6.12. nsslapd-db-deadlock-rate This attribute shows the number of deadlocks detected. 6.4.6.13. nsslapd-db-dirty-pages This attribute shows the dirty pages currently in the cache. 6.4.6.14. nsslapd-db-hash-buckets This attribute shows the number of hash buckets in buffer hash table. 6.4.6.15. nsslapd-db-hash-elements-examine-rate This attribute shows the total number of hash elements traversed during hash table lookups. 6.4.6.16. nsslapd-db-hash-search-rate This attribute shows the total number of buffer hash table lookups. 6.4.6.17. nsslapd-db-lock-conflicts This attribute shows the total number of locks not immediately available due to conflicts. 6.4.6.18. nsslapd-db-lockers This attribute shows the number of current lockers. 6.4.6.19. nsslapd-db-lock-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. 6.4.6.20. nsslapd-db-lock-request-rate This attribute shows the total number of locks requested. 6.4.6.21. nsslapd-db-log-bytes-since-checkpoint This attribute shows the number of bytes written to this log since the last checkpoint. 6.4.6.22. nsslapd-db-log-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. 6.4.6.23. nsslapd-db-log-write-rate This attribute shows the number of megabytes and bytes written to this log. 6.4.6.24. nsslapd-db-longest-chain-length This attribute shows the longest chain ever encountered in buffer hash table lookups. 6.4.6.25. nsslapd-db-page-create-rate This attribute shows the pages created in the cache. 6.4.6.26. nsslapd-db-page-read-rate This attribute shows the pages read into the cache. 6.4.6.27. nsslapd-db-page-ro-evict-rate This attribute shows the clean pages forced from the cache. 6.4.6.28. nsslapd-db-page-rw-evict-rate This attribute shows the dirty pages forced from the cache. 6.4.6.29. nsslapd-db-pages-in-use This attribute shows all pages, clean or dirty, currently in use. 6.4.6.30. nsslapd-db-page-trickle-rate This attribute shows the dirty pages written using the memp_trickle interface. 6.4.6.31. 
nsslapd-db-page-write-rate This attribute shows the pages written from the cache. 6.4.6.32. nsslapd-db-txn-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. 6.4.7. Database attributes under cn=changelog,cn=database_name,cn=ldbm database,cn=plugins,cn=config In multi-supplier replication, Directory Server stores changelog configuration entries under the cn=changelog,cn=database_name,cn=ldbm database,cn=plugins,cn=config entry, which has the top and extensibleObject object classes. Note The term changelog may refer to: Changelog The actual changelog in multi-supplier replication that uses the attributes described in this chapter. Retro Changelog The plug-in that Directory Server uses for compatibility with certain legacy applications. For more information, see Section 6.3.49, "Retro Changelog plug-in" . 6.4.7.1. cn The cn attribute sets the relative distinguished name (RDN) of a changelog entry. This attribute is mandatory. Parameter Description Entry DN cn=changelog,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any string Default Value changelog Syntax DirectoryString Example cn=changelog,cn=userRoot,cn=ldbm database,cn=plugins 6.4.7.2. nsslapd-changelogmaxage When synchronizing with a consumer, Directory Server stores each update in the changelog with a time stamp. The nsslapd-changelogmaxage attribute sets the maximum age of a record stored in the changelog. Directory Server automatically removes older records that were successfully transferred to all consumers. By default, Directory Server removes records that are older than seven days. However, if you disable the nsslapd-changelogmaxage and nsslapd-changelogmaxentries attributes, Directory Server will keep all records in the changelog, which can lead to excessive growth of the changelog file. Note Retro Changelog has its own nsslapd-changelogmaxage attribute. For more information, see the Retro Changelog nsslapd-changelogmaxage attribute. The attribute change does not require a server restart; however, the change takes effect after the trim operation that is scheduled according to the nsslapd-changelogtrim-interval attribute setting. Parameter Description Entry DN cn=changelog,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 0 (entries are not removed according to their age) to maximum 32-bit integer (2147483647) Default Value 7d Syntax DirectoryString IntegerAgeID , where AgeID is: s ( S ) for seconds m ( M ) for minutes h ( H ) for hours d ( D ) for days w ( W ) for weeks If you set only the integer value without the AgeID , then Directory Server interprets it as seconds. Example nsslapd-changelogmaxage: 30d 6.4.7.3. nsslapd-changelogmaxentries The nsslapd-changelogmaxentries attribute sets the maximum number of records stored in the changelog. If the number of the oldest records that were successfully transferred to all consumers exceeds the nsslapd-changelogmaxentries value, Directory Server automatically removes these records from the changelog. If you set the nsslapd-changelogmaxentries and nsslapd-changelogmaxage attributes to 0 , Directory Server keeps all records in the changelog, which can lead to excessive growth of the changelog file. Note Directory Server does not automatically reduce the file size of the replication changelog if you set a lower value in the nsslapd-changelogmaxentries attribute.
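As an illustrative sketch only, the record limit can be set with ldapmodify; the backend name userRoot, the host name, and the bind options are assumptions:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=changelog,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-changelogmaxentries
nsslapd-changelogmaxentries: 5000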
The attribute change does not require the server restart, however the change takes effect after the trim operation that is scheduled according to the nsslapd-changelogtrim-interval attribute setting. Parameter Description Entry DN cn=changelog,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 0 (the only maximum limit is the disk size) to maximum 32-bit integer (2147483647) Default Value 0 Syntax Integer Example nsslapd-changelogmaxentries: 5000 6.4.7.4. nsslapd-changelogtrim-interval Directory Server repeatedly runs a trimming process on the changelog. To change the time between two runs, update the nsslapd-changelogtrim-interval attribute and set the interval in seconds. The attribute change does not require the server restart, however the change takes effect after the trim operation. Parameter Description Entry DN cn=changelog,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to the maximum 32 bit integer value (2147483647) Default Value 300 (5 minutes) Syntax DirectoryString Example nsslapd-changelogtrim-interval: 300 6.4.7.5. nsslapd-encryptionalgorithm The nsslapd-encryptionalgorithm attribute specifies the encryption algorithm Directory Server uses for the changelog encryption. To enable the changelog encryption, you must install the server certificate on the directory server. You must restart the server to apply the attribute value changes. Parameter Description Entry DN cn=changelog,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range AES or 3DES Default Value None Syntax DirectoryString Example nsslapd-encryptionalgorithm: AES 6.4.7.6. nsSymmetricKey The nsSymmetricKey attribute stores the internally-generated symmetric key. You must restart the server to apply the attribute value changes. Parameter Description Entry DN cn=changelog,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range Base 64-encoded key Default Value None Syntax DirectoryString Example None 6.4.8. Database attributes under cn=monitor,cn= database_name ,cn=ldbm database,cn=plugins,cn=config The attributes in this tree node entry are all read-only, database performance counters. If the nsslapd-counters attribute in cn=config is set to on , then some of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For database monitoring, the entrycachehits and entrycachetries counters use 64-bit integers. Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. 6.4.8.1. currentDNcachecount Number of cached DNs. 6.4.8.2. currentDNcachesize Current size of the DN cache in bytes. 6.4.8.3. dbfilecachehit- number This attribute gives the number of times that a search requiring data from this file was performed and that the data were successfully obtained from the cache. The number in this attributes name corresponds to the one in dbfilename . 6.4.8.4. dbfilecachemiss- number This attribute gives the number of times that a search requiring data from this file was performed and that the data could not be obtained from the cache. The number in this attributes name corresponds to the one in dbfilename . 6.4.8.5. dbfilename- number This attribute gives the name of the file and provides a sequential integer identifier (starting at 0) for the file. 
All associated statistics for the file are given this same numerical identifier. 6.4.8.6. dbfilepagein- number This attribute gives the number of pages brought to the cache from this file. The number in this attributes name corresponds to the one in dbfilename . 6.4.8.7. dbfilepageout- number This attribute gives the number of pages for this file written from cache to disk. The number in this attributes name corresponds to the one in dbfilename . 6.4.8.8. DNcachehitratio Percentage of the DNs found in the cache. 6.4.8.9. DNcachehits DNs found within the cache. 6.4.8.10. DNcachemisses DNs not found within the cache. 6.4.8.11. DNcachetries Total number of cache lookups since the instance was started. 6.4.8.12. maxDNcachesize Current value of the nsslapd-ndn-cache-max-size parameter. For details how to update this setting, see Section 2.1.135, "nsslapd-ndn-cache-max-size" . 6.4.9. Database attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config The set of default indexes is stored here. Default indexes are configured per back end in order to optimize Directory Server functionality for the majority of setup scenarios. All indexes, except system-essential ones, can be removed, but care should be taken so as not to cause unnecessary disruptions. 6.4.9.1. cn This attribute provides the name of the attribute to index. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid index cn Default Value None Syntax DirectoryString Example cn: aci 6.4.9.2. nsIndex This object class defines an index in the back end database. This object is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.44 Table 6.6. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. nsSystemIndex Identify whether or not the index is a system defined index. Table 6.7. Allowed Attributes Attribute Definition description Gives a text description of the entry. nsIndexType Identifies the index type. nsMatchingRule Identifies the matching rule. 6.4.9.3. nsIndexType This optional, multi-valued attribute specifies the type of index for Directory Server operations and takes the values of the attributes to be indexed. Each required index type has to be entered on a separate line. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values * pres = presence index * eq = equality index * approx = approximate index * sub = substring index * matching rule = international index * index browse = browsing index Default Value Syntax DirectoryString Example nsIndexType: eq 6.4.9.4. nsMatchingRule This optional, multi-valued attribute specifies the ordering matching rule name or OID used to match values and to generate index keys for the attribute. This is most commonly used to ensure that equality and range searches work correctly for languages other than English (7-bit ASCII). This is also used to allow range searches to work correctly for integer syntax attributes that do not specify an ordering matching rule in their schema definition. uidNumber and gidNumber are two commonly used attributes that fall into this category. For example, for a uidNumber that uses integer syntax, the rule attribute could be nsMatchingRule: integerOrderingMatch . Note Any change to this attribute will not take effect until the change is saved and the index is rebuilt using db2index command. 
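For reference, a reindex can also be triggered online by adding a task entry under cn=index,cn=tasks,cn=config; the backend name userRoot, the attribute uidNumber, and the bind options in this sketch are assumptions to adapt to your deployment:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=index uidNumber,cn=index,cn=tasks,cn=config
changetype: add
objectclass: top
objectclass: extensibleObject
cn: index uidNumber
nsInstance: userRoot
nsIndexAttribute: uidNumber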
Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid collation order object identifier (OID) Default Value None Syntax DirectoryString Example nsMatchingRule: 2.16.840.1.113730.3.3.2.3.1 (For Bulgarian) 6.4.9.5. nsSystemIndex This mandatory attribute specifies whether the index is a system index , an index which is vital for Directory Server operations. If this attribute has a value of true , then it is system-essential. System indexes should not be removed, as this will seriously disrupt server functionality. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values true | false Default Value Syntax DirectoryString Example nssystemindex: true 6.4.10. Database attributes under cn=index, cn=database_name ,cn=ldbm database,cn=plugins,cn=config In addition to the set of default indexes that are stored under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config , custom indexes can be created for user-defined back end instances; these are stored under cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . For example, the index file for the aci attribute under o=UserRoot appears in Directory Server as follows: dn:cn=aci,cn=index,cn=UserRoot,cn=ldbm database,cn=plugins,cn=config objectclass:top objectclass:nsIndex cn:aci nsSystemIndex:true nsIndexType:pres These entries share all of the indexing attributes listed for the default indexes in Section 6.4.9, "Database attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config" . 6.4.10.1. nsIndexIDListScanLimit This multi-valued parameter defines a search limit for certain indices or to use no ID list. Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Default Value Syntax DirectoryString Example nsIndexIDListScanLimit: limit=0 type=eq values=inetorgperson 6.4.10.2. nsSubStrBegin By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrBegin attribute sets the required number of characters for an indexed search for the beginning of a search string, before the wildcard. For example: abc* If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrBegin: 2 6.4.10.3. nsSubStrEnd By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. 
The nsSubStrEnd attribute sets the required number of characters for an indexed search for the end of a search string, after the wildcard. For example: *xyz If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrEnd: 2 6.4.10.4. nsSubStrMiddle By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrMiddle attribute sets the required number of characters for an indexed search where a wildcard is used in the middle of a search string. For example: ab*z If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrMiddle: 3 6.4.11. Database attributes under cn= attribute_name ,cn=encrypted attributes,cn= database_name ,cn=ldbm database,cn=plugins,cn=config In addition to the set of default indexes that are stored under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config , custom indexes can be created for user-defined back end instances; these are stored under cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . For example, the index file for the aci attribute under o=UserRoot appears in Directory Server as follows: dn:cn=aci,cn=index,cn=UserRoot,cn=ldbm database,cn=plugins,cn=config objectclass:top objectclass:nsIndex cn:aci nsSystemIndex:true nsIndexType:pres These entries share all of the indexing attributes listed for the default indexes in Section 6.4.9, "Database attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config" . 6.4.11.1. nsIndexIDListScanLimit This multi-valued parameter defines a search limit for certain indices or to use no ID list. Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Default Value Syntax DirectoryString Example nsIndexIDListScanLimit: limit=0 type=eq values=inetorgperson 6.4.11.2. nsSubStrBegin By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrBegin attribute sets the required number of characters for an indexed search for the beginning of a search string, before the wildcard. For example: abc* If the value of this attribute is changed, then the index must be regenerated using db2index . 
Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrBegin: 2 6.4.11.3. nsSubStrEnd By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrEnd attribute sets the required number of characters for an indexed search for the end of a search string, after the wildcard. For example: *xyz If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrEnd: 2 6.4.11.4. nsSubStrMiddle By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrMiddle attribute sets the required number of characters for an indexed search where a wildcard is used in the middle of a search string. For example: ab*z If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrMiddle: 3 6.5. Database link plug-in attributes The database link plug-in attributes are also organized in an information tree. All plug-in technology used by the database link instances is stored in the cn=chaining database plug-in node. This section presents additional attribute information for three nodes of the cn=chaining database,cn=plugins,cn=config information tree. 6.5.1. Database link attributes under cn=config,cn=chaining database,cn=plugins,cn=config This section covers global configuration attributes that are common to all instances and are stored in the cn=config,cn=chaining database,cn=plugins,cn=config tree node. 6.5.1.1. nsActiveChainingComponents This attribute lists the components using chaining. A component is any functional unit in the server. The value of this attribute overrides the value in the global configuration attribute. To disable chaining on a particular database instance, use the value None . This attribute also allows the components used to chain to be altered. By default, no components are allowed to chain, which explains why this attribute will probably not appear in a list of cn=config,cn=chaining database,cn=config attributes, as LDAP considers empty attributes to be non-existent.
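For illustration, the sketch below adds a component so that it is allowed to chain, using the component DN shown in the example row of the table that follows; the host name and bind options are placeholders:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=config,cn=chaining database,cn=plugins,cn=config
changetype: modify
add: nsActiveChainingComponents
nsActiveChainingComponents: cn=uid uniqueness,cn=plugins,cn=config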
Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid component entry Default Value None Syntax DirectoryString Example nsActiveChainingComponents: cn=uid uniqueness,cn=plugins,cn=config 6.5.1.2. nsMaxResponseDelay This error detection, performance-related attribute specifies the maximum amount of time it can take a remote server to respond to an LDAP operation request made by a database link before an error is suspected. Once this delay period has been met, the database link tests the connection with the remote server. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid delay period in seconds Default Value 60 seconds Syntax Integer Example nsMaxResponseDelay: 60 6.5.1.3. nsMaxTestResponseDelay This error detection, performance-related attribute specifies the duration of the test issued by the database link to check whether the remote server is responding. If a response from the remote server is not returned before this period has passed, the database link assumes the remote server is down, and the connection is not used for subsequent operations. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid delay period in seconds Default Value 15 seconds Syntax Integer Example nsMaxTestResponseDelay: 15 6.5.1.4. nsTransmittedControls This attribute, which can be either a global (and thus dynamic) configuration attribute or an instance (that is, cn= database link instance , cn=chaining database,cn=plugins,cn=config ) configuration attribute, allows the controls the database link forwards to be altered. The following controls are forwarded by default by the database link: Managed DSA (OID: 2.16.840.1.113730.3.4.2) Virtual list view (VLV) (OID: 2.16.840.1.113730.3.4.9) Server side sorting (OID: 1.2.840.113556.1.4.473) Loop detection (OID: 1.3.6.1.4.1.1466.29539.12) Other controls, such as dereferencing and simple paged results for searches, can be added to the list of controls to forward. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid OID or the above listed controls forwarded by the database link Default Value None Syntax Integer Example nsTransmittedControls: 1.2.840.113556.1.4.473 6.5.2. Database link attributes under cn=default instance config,cn=chaining database,cn=plugins,cn=config Default instance configuration attributes are housed in the cn=default instance config,cn=chaining database,cn=plugins,cn=config tree node. 6.5.2.1. nsAbandonedSearchCheckInterval This attribute shows the number of seconds that pass before the server checks for abandoned operations. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) seconds Default Value 1 Syntax Integer Example nsAbandonedSearchCheckInterval: 10 6.5.2.2. nsBindConnectionsLimit This attribute shows the maximum number of TCP connections the database link establishes with the remote server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 50 connections Default Value 3 Syntax Integer Example nsBindConnectionsLimit: 3 6.5.2.3. nsBindRetryLimit Contrary to what the name suggests, this attribute does not specify the number of times a database link retries to bind with the remote server, but the total number of times it tries to bind with the remote server.
A value of 1 here indicates that the database link only attempts to bind once. Note Retries only occur for connection failures and not for other types of errors, such as invalid bind DNs or bad passwords. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to 5 Default Value 3 Syntax Integer Example nsBindRetryLimit: 3 6.5.2.4. nsBindTimeout This attribute shows the amount of time before the bind attempt times out. There is no real valid range for this attribute, except reasonable patience limits. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to 60 seconds Default Value 15 Syntax Integer Example nsBindTimeout: 15 6.5.2.5. nsCheckLocalACI Reserved for advanced use only. This attribute controls whether ACIs are evaluated on the database link as well as the remote data server. Changes to this attribute only take effect once the server has been restarted. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsCheckLocalACI: on 6.5.2.6. nsConcurrentBindLimit This attribute shows the maximum number of concurrent bind operations per TCP connection. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 25 binds Default Value 10 Syntax Integer Example nsConcurrentBindLimit: 10 6.5.2.7. nsConcurrentOperationsLimit This attribute specifies the maximum number of concurrent operations allowed. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 50 operations Default Value 2 Syntax Integer Example nsConcurrentOperationsLimit: 5 6.5.2.8. nsConnectionLife This attribute specifies connection lifetime. Connections between the database link and the remote server can be kept open for an unspecified time or closed after a specific period of time. It is faster to keep the connections open, but it uses more resources. When the value is 0 and a list of failover servers is provided in the nsFarmServerURL attribute, the main server is never contacted after failover to the alternate server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to limitless seconds (where 0 means forever) Default Value 0 Syntax Integer Example nsConnectionLife: 0 6.5.2.9. nsOperationConnectionsLimit This attribute shows the maximum number of LDAP connections the database link establishes with the remote server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to n connections Default Value 20 Syntax Integer Example nsOperationConnectionsLimit: 10 6.5.2.10. nsProxiedAuthorization Reserved for advanced use only. If you disable proxied authorization, binds for chained operations are executed as the user set in the nsMultiplexorBindDn attribute. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsProxiedAuthorization: on 6.5.2.11. nsReferralOnScopedSearch This attribute controls whether referrals are returned by scoped searches. This attribute can be used to optimize the directory because returning referrals in response to scoped searches is more efficient. A referral is returned to all the configured farm servers. 
Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsReferralOnScopedSearch: off 6.5.2.12. nsSizeLimit This attribute shows the default size limit for the database link in bytes. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range -1 (no limit) to maximum 32-bit integer (2147483647) entries Default Value 2000 Syntax Integer Example nsSizeLimit: 2000 6.5.2.13. nsTimeLimit This attribute shows the default search time limit for the database link. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer (2147483647) seconds Default Value 3600 Syntax Integer Example nsTimeLimit: 3600 6.5.3. Database link attributes under cn= database_link_name ,cn=chaining database,cn=plugins,cn=config This information node stores the attributes concerning the server containing the data. A farm server is a server which contains data on databases. This attribute can contain optional servers for failover, separated by spaces. For cascading chaining, this URL can point to another database link. 6.5.3.1. nsBindMechanism This attribute sets a bind mechanism for the farm server to connect to the remote server. A farm server is a server containing data in one or more databases. This attribute configures the connection type, either standard, TLS, or SASL. empty. This performs simple authentication and requires the nsMultiplexorBindDn and nsMultiplexorCredentials attributes to give the bind information. EXTERNAL. This uses an TLS certificate to authenticate the farm server to the remote server. Either the farm server URL must be set to the secure URL ( ldaps ) or the nsUseStartTLS attribute must be set to on . Additionally, the remote server must be configured to map the farm server's certificate to its bind identity. DIGEST-MD5. This uses SASL with DIGEST-MD5 encryption. As with simple authentication, this requires the nsMultiplexorBindDn and nsMultiplexorCredentials attributes to give the bind information. GSSAPI. This uses Kerberos-based authentication over SASL. The farm server must be connected over the standard port, meaning the URL has ldap , because Directory Server does not support SASL/GS-API over TLS. The farm server must be configured with a Kerberos keytab, and the remote server must have a defined SASL mapping for the farm server's bind identity. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values * empty * EXTERNAL * DIGEST-MD5 * GSSAPI Default Value empty Syntax DirectoryString Example nsBindMechanism: GSSAPI 6.5.3.2. nsFarmServerURL This attribute gives the LDAP URL of the remote server. A farm server is a server containing data in one or more databases. This attribute can contain optional servers for failover, separated by spaces. If using cascading changing, this URL can point to another database link. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Any valid remote server LDAP URL Default Value Syntax DirectoryString Example nsFarmServerURL: ldap://farm1.example.com farm2.example.com:389 farm3.example.com:1389/ 6.5.3.3. nshoplimit This attribute specifies the maximum number of times a database is allowed to chain; that is, the number of times a request can be forwarded from one database link to another. 
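As a sketch only, the hop limit can be lowered with ldapmodify; the database link name examplelink, the host name, and the bind options are assumptions:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: cn=examplelink,cn=chaining database,cn=plugins,cn=config
changetype: modify
replace: nsHopLimit
nsHopLimit: 3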
Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Range 1 to an appropriate upper limit for the deployment Default Value 10 Syntax Integer Example nsHopLimit: 3 6.5.3.4. nsMultiplexorBindDN This attribute gives the DN of the administrative entry used to communicate with the remote server. The multiplexor is the server that contains the database link and communicates with the farm server. This bind DN cannot be the Directory Manager, and, if this attribute is not specified, the database link binds as anonymous . Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Default Value DN of the multiplexor Syntax DirectoryString Example nsMultiplexerBindDN: cn=proxy manager 6.5.3.5. nsMultiplexorCredentials Password for the administrative user, given in plain text. If no password is provided, it means that users can bind as anonymous . The password is encrypted in the configuration file. The example below is what is shown, not what is typed. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Any valid password, which will then be encrypted using the DES reversible password encryption schema Default Value Syntax DirectoryString Example nsMultiplexerCredentials: {DES} 9Eko69APCJfF 6.5.3.6. nsUseStartTLS This attribute sets whether to use Start TLS to initiate a secure, encrypted connection over an insecure port. This attribute can be used if the nsBindMechanism attribute is set to EXTERNAL but the farm server URL set to the standard URL ( ldap ) or if the nsBindMechanism attribute is left empty. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values off | on Default Value off Syntax DirectoryString Example nsUseStartTLS: on 6.5.4. Database link attributes under cn=monitoring,cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Attributes used for monitoring activity on the instances are stored in the cn=monitor,cn= database instance name ,cn=chaining database,cn=plugins,cn=config information tree. 6.5.4.1. nsAbandonCount This attribute gives the number of abandon operations received. 6.5.4.2. nsAddCount This attribute gives the number of add operations received. 6.5.4.3. nsBindCount This attribute gives the number of bind requests received. 6.5.4.4. nsCompareCount This attribute gives the number of compare operations received. 6.5.4.5. nsDeleteCount This attribute gives the number of delete operations received. 6.5.4.6. nsModifyCount This attribute gives the number of modify operations received. 6.5.4.7. nsOpenBindConnectionCount This attribute gives the number of open connections for bind operations. 6.5.4.8. nsOperationConnectionCount This attribute gives the number of open connections for normal operations. 6.5.4.9. nsRenameCount This attribute gives the number of rename operations received. 6.5.4.10. nsSearchBaseCount This attribute gives the number of base level searches received. 6.5.4.11. nsSearchOneLevelCount This attribute gives the number of one-level searches received. 6.5.4.12. nsSearchSubtreeCount This attribute gives the number of subtree searches received. 6.5.4.13. nsUnbindCount This attribute gives the number of unbinds received. 6.6. 
Referential integrity plug-in attributes Referential Integrity ensures that when you perform update or remove operations on an entry in the directory, the server also updates information for entries that reference the removed or updated entry. For example, if a user's entry is removed from the directory and Referential Integrity is enabled, the server also removes the user from any groups where the user is a member. 6.6.1. nsslapd-pluginAllowReplUpdates Referential Integrity can be a very resource-demanding procedure. Therefore, if you configured multi-supplier replication, the Referential Integrity plug-in ignores replicated updates by default. However, sometimes it is not possible to enable the Referential Integrity plug-in, or the plug-in is not available. For example, one of the suppliers in your replication topology is Active Directory, which does not support Referential Integrity (see the Windows Synchronization chapter for more details). In cases like this, you can allow the Referential Integrity plug-in on another supplier to process replicated updates using the nsslapd-pluginAllowReplUpdates attribute. Important Only one supplier in a multi-supplier replication topology must have the nsslapd-pluginAllowReplUpdates attribute set to on . Otherwise, it can lead to replication errors and requires a full initialization to fix the problem. On the other hand, the Referential Integrity plug-in must be enabled on all suppliers where possible. Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values on | off Default Value off Syntax Boolean Example nsslapd-pluginAllowReplUpdates: off
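For example, the following sketch turns the setting on for a single supplier; the host name and bind options are placeholders, and per the Important note above it should be run against one supplier only. Depending on the deployment, a restart of that instance may be required for the plug-in configuration change to take effect.
ldapmodify -D "cn=Directory Manager" -W -H ldap://supplier1.example.com -x
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginAllowReplUpdates
nsslapd-pluginAllowReplUpdates: on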
[ "dn: cn=Telephone Syntax,cn=plugins,cn=config objectclass: top objectclass: nsSlapdPlugin objectclass: extensibleObject cn: Telephone Syntax nsslapd-pluginPath: libsyntax-plugin nsslapd-pluginInitfunc: tel_init nsslapd-pluginType: syntax nsslapd-pluginEnabled: on", "dn:cn=ACL Plugin,cn=plugins,cn=config objectclass:top objectclass:nsSlapdPlugin objectclass:extensibleObject", "dn: cn=Account Policy Plugin,cn=plugins,cn=config nsslapd-pluginarg0: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config", "dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config objectClass: top objectClass: extensibleObject cn: config ... attributes for evaluating accounts alwaysRecordLogin: yes stateattrname: lastLoginTime altstateattrname: createTimestamp ... attributes for account policy entries specattrname: acctPolicySubentry limitattrname: accountInactivityLimit", "dn: cn=AccountPolicy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy 86400 seconds per day * 30 days = 2592000 seconds accountInactivityLimit: 2592000 cn: AccountPolicy", "dn: uid=scarter,ou=people,dc=example,dc=com lastLoginTime: 20060527001051Z acctPolicySubentry: cn=AccountPolicy,dc=example,dc=com", "dn: cn=Barbara Jensen,ou=Engineering,dc=example,dc=com objectClass: top objectClass: alias objectClass: extensibleObject cn: Barbara Jensen aliasedObjectName: cn=Barbara Smith,ou=Engineering,dc=example,dc=com", "dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=ipHost autoMemberDefaultGroup: cn=systems,cn=hostgroups,ou=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn", "dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group for webservers cn: webservers autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^www\\.web[0-9]+\\.example\\.com", "member: uid=jsmith,ou=People,dc=example,dc=com", "dn: cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: PAM Pass Through Auth nsslapd-pluginPath: libpam-passthru-plugin nsslapd-pluginInitfunc: pam_passthruauth_init nsslapd-pluginType: preoperation pass:quotes[ nsslapd-pluginEnabled: on ] nsslapd-pluginLoadGlobal: true nsslapd-plugin-depends-on-type: database nsslapd-pluginId: pam_passthruauth nsslapd-pluginVersion: 9.0.0 nsslapd-pluginVendor: Red Hat nsslapd-pluginDescription: PAM pass through authentication plugin dn: cn=Example PAM Config,cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: Example PAM Config pamMissingSuffix: ALLOW pass:quotes[ pamExcludeSuffix: cn=config ] pass:quotes[ pamIDMapMethod: RDN ou=people,dc=example,dc=com ] pass:quotes[ pamIDMapMethod: ENTRY ou=engineering,dc=example,dc=com ] pass:quotes[ pamIDAttr: customPamUid ] pass:quotes[ pamFilter: (manager=uid=bjensen,ou=people,dc=example,dc=com) ] pamFallback: FALSE pass:quotes[ pamSecure: TRUE ] pass:quotes[ pamService: ldapserver ]", "pamIDMapMethod: RDN pamSecure: FALSE pamService: ldapserver", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: cn=Example PAM config entry,cn=PAM Pass Through Auth,cn=plugins,cn=config changetype: modify add: pamModuleIsThreadSafe pamModuleIsThreadSafe: 
on", "ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x -b \"cn=Password Storage Schemes,cn=plugins,cn=config\" -s sub \"(objectclass=*)\" dn", "nsslapd-attribute: attribute :pass:attributes[{blank}] alias", "(modifyTimestamp>=20200101010101Z)", "nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40", "nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40", "dn:cn=aci,cn=index,cn=UserRoot,cn=ldbm database,cn=plugins,cn=config objectclass:top objectclass:nsIndex cn:aci nsSystemIndex:true nsIndexType:pres", "abc*", "*xyz", "ab*z", "dn:cn=aci,cn=index,cn=UserRoot,cn=ldbm database,cn=plugins,cn=config objectclass:top objectclass:nsIndex cn:aci nsSystemIndex:true nsIndexType:pres", "abc*", "*xyz", "ab*z" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuration_and_schema_reference/plug_in_implemented_server_functionality_reference
Chapter 5. Installing Logging
Chapter 5. Installing Logging Red Hat OpenShift Service on AWS Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs. Important You must install the Red Hat OpenShift Logging Operator after the log store Operator. You deploy logging by installing the Loki Operator or OpenShift Elasticsearch Operator to manage your log store, followed by the Red Hat OpenShift Logging Operator to manage the components of logging. You can use either the Red Hat OpenShift Service on AWS web console or the Red Hat OpenShift Service on AWS CLI to install or configure logging. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Tip You can alternatively apply all example objects. 5.1. Installing Logging with Elasticsearch using the web console You can use the Red Hat OpenShift Service on AWS web console to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. Elasticsearch is a memory-intensive application. By default, Red Hat OpenShift Service on AWS installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three Red Hat OpenShift Service on AWS nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. Note If you do not want to use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the Red Hat OpenShift Service on AWS web console: Install the OpenShift Elasticsearch Operator: In the Red Hat OpenShift Service on AWS web console, click Operators OperatorHub . Choose OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation Mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. 
The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Service on AWS metric, which would cause conflicts. Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.y as the Update Channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . Install the Red Hat OpenShift Logging Operator: In the Red Hat OpenShift Service on AWS web console, click Operators OperatorHub . Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that the Red Hat OpenShift Logging Operator installed by switching to the Operators Installed Operators page. Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded . If the Operator does not appear as installed, to troubleshoot further: Switch to the Operators Installed Operators page and inspect the Status column for any errors or failures. Switch to the Workloads Pods page and check the logs in any pods in the openshift-logging project that are reporting issues. Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. In the YAML field, replace the code with the following: Note This default OpenShift Logging configuration should support a wide array of environments. Review the topics on tuning and configuring logging components for information on modifications you can make to your OpenShift Logging cluster. 
apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {} 1 The name must be instance . 2 The OpenShift Logging management state. In some cases, if you change the OpenShift Logging defaults, you must set this to Unmanaged . However, an unmanaged deployment does not receive updates until OpenShift Logging is placed back into a managed state. 3 Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage. 4 Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 7d for seven days. Logs older than the maxAge are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source. 5 Specify the number of Elasticsearch nodes. See the note that follows this list. 6 Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, OpenShift Logging uses ephemeral storage. 7 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 8 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. 9 Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see Configuring the log visualizer . 10 Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see "Configuring Fluentd". Note The maximum number of master nodes is three. If you specify a nodeCount greater than 3 , Red Hat OpenShift Service on AWS creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. 
For example, if nodeCount=4 , the following nodes are created: USD oc get deployment Example output cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cd-tuhduuw-1-f5c885dbf-dlqws 1/1 Running 0 2m4s elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s Click Create . This creates the logging components, the Elasticsearch custom resource and components, and the Kibana interface. Verify the install: Switch to the Workloads Pods page. Select the openshift-logging project. You should see several pods for OpenShift Logging, Elasticsearch, your collector, and Kibana similar to the following list: Example output cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s 5.2. Installing Logging with Elasticsearch using the CLI Elasticsearch is a memory-intensive application. By default, Red Hat OpenShift Service on AWS installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three Red Hat OpenShift Service on AWS nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure Create a Namespace object for the OpenShift Elasticsearch Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Service on AWS metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Namespace object for the Red Hat OpenShift Logging Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 1 You must specify openshift-logging as the namespace for logging versions 5.7 and earlier. For logging 5.8 and later, you can use any namespace. 
Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object for the OpenShift Elasticsearch Operator: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe a namespace to the OpenShift Elasticsearch Operator: Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: <channel> 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-<x.y> as the channel. 3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM) Apply the subscription by running the following command: USD oc apply -f <filename>.yaml Verify the Operator installation by running the following command: USD oc get csv --all-namespaces Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-node-lease elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-public elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-system elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-credential-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded Create an OperatorGroup 
object for the Red Hat OpenShift Logging Operator: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2 1 You must specify openshift-logging as the namespace for logging versions 5.7 and earlier. For logging 5.8 and later, you can use any namespace. 2 You must specify openshift-logging as the namespace for logging versions 5.7 and earlier. For logging 5.8 and later, you can use any namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the Red Hat OpenShift Logging Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace for logging versions 5.7 and older. For logging 5.8 and later versions, you can use any namespace. 2 Specify stable or stable-x.y as the channel. 3 Specify redhat-operators . If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the subscription object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging object as a YAML file: Example ClusterLogging object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {} 1 The name must be instance . 2 The OpenShift Logging management state. In some cases, if you change the OpenShift Logging defaults, you must set this to Unmanaged . However, an unmanaged deployment does not receive updates until OpenShift Logging is placed back into a managed state. 3 Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage. 4 Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 7d for seven days. Logs older than the maxAge are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source. 5 Specify the number of Elasticsearch nodes. 6 Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, OpenShift Logging uses ephemeral storage. 7 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. 
The default values are 16Gi for the memory request and 1 for the CPU request. 8 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. 9 Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. 10 Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. Note The maximum number of master nodes is three. If you specify a nodeCount greater than 3 , Red Hat OpenShift Service on AWS creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded. For example, if nodeCount=4 , the following nodes are created: USD oc get deployment Example output cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. 5.3. Installing Logging and the Loki Operator using the CLI To install and configure logging on your Red Hat OpenShift Service on AWS cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the Red Hat OpenShift Service on AWS CLI. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 
Procedure Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as a Red Hat OpenShift Service on AWS metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a Namespace object for the Red Hat OpenShift Logging Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-logging: "true" openshift.io/cluster-monitoring: "true" 2 1 The Red Hat OpenShift Logging Operator is only deployable to the openshift-logging namespace. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-logging namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 1 You must specify the openshift-logging namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging CR object: Example ClusterLogging CR object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . 
Apply the ClusterLogging CR object by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 5.4. Installing Logging and the Loki Operator using the web console To install and configure logging on your Red Hat OpenShift Service on AWS cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OperatorHub within the web console. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the Red Hat OpenShift Service on AWS web console. Procedure In the Red Hat OpenShift Service on AWS web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Install the Red Hat OpenShift Logging Operator: In the Red Hat OpenShift Service on AWS web console, click Operators OperatorHub . Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . 
Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Important It is not possible to change the number 1x for the deployment size. Click Create . Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. 
In the YAML field, replace the code with the following: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. Additional resources About the OVN-Kubernetes default Container Network Interface (CNI) network provider
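Both the CLI and web console procedures above reference an object storage secret ( logging-loki-s3 ) in the LokiStack CR but do not show how to create it. The following is a minimal sketch for an AWS S3 bucket; the key names ( bucketnames , endpoint , region , access_key_id , access_key_secret ) and all values are assumptions based on common Loki Operator conventions, so verify the exact keys required by the Loki Operator version you install.

# Hypothetical example: create the secret referenced by spec.storage.secret.name in the LokiStack CR
oc create secret generic logging-loki-s3 \
  --namespace openshift-logging \
  --from-literal=bucketnames="<bucket_name>" \
  --from-literal=endpoint="https://s3.<region>.amazonaws.com" \
  --from-literal=region="<region>" \
  --from-literal=access_key_id="<access_key_id>" \
  --from-literal=access_key_secret="<access_key_secret>"

The secret should be in the openshift-logging namespace so that the LokiStack components can read it.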
[ "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}", "oc get deployment", "cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cd-tuhduuw-1-f5c885dbf-dlqws 1/1 Running 0 2m4s elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s", "cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: <channel> 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator", "oc apply -f <filename>.yaml", "oc get csv --all-namespaces", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-node-lease elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-public elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded kube-system elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-apiserver elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-authentication elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager-operator 
elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-controller-manager elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded openshift-cloud-credential-operator elasticsearch-operator.v5.8.3 OpenShift Elasticsearch Operator 5.8.3 elasticsearch-operator.v5.8.2 Succeeded", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging spec: managementState: Managed 2 logStore: type: elasticsearch 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: <storage_class_name> 6 size: 200G resources: 7 limits: memory: 16Gi requests: memory: 16Gi proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: SingleRedundancy visualization: type: kibana 9 kibana: replicas: 1 collection: type: fluentd 10 fluentd: {}", "oc get deployment", "cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s", "oc apply -f <filename>.yaml", "oc get pods -n openshift-logging", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", 
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed", "oc apply -f <filename>.yaml", "oc get pods -n openshift-logging", "oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/cluster-logging-deploying
Chapter 9. Authenticating as an Active Directory user using PKINIT with a smart card
Chapter 9. Authenticating as an Active Directory user using PKINIT with a smart card Active Directory (AD) users can authenticate with a smart card to a desktop client system joined to IdM and get a Kerberos ticket-granting ticket (TGT). These tickets can be used for single sign-on (SSO) authentication from the client. Prerequisites The IdM server is configured for smart card authentication. For more information, see Configuring the IdM server for smart card authentication or Using Ansible to configure the IdM server for smart card authentication . The client is configured for smart card authentication. For more information, see Configuring the IdM client for smart card authentication or Using Ansible to configure IdM clients for smart card authentication . The krb5-pkinit package is installed. The AD server is configured to trust the certificate authority (CA) that issued the smart card certificate. Import the CA certificates into the NTAuth store (see Microsoft support ) and add the CA as a trusted CA. See Active Directory documentation for details. Procedure Configure the Kerberos client to trust the CA that issued the smart card certificate: On the IdM client, open the /etc/krb5.conf file. Add the following lines to the file: If the user certificates do not contain a certificate revocation list (CRL) distribution point extension, configure AD to ignore revocation errors: Save the following REG-formatted content in a plain text file and import it to the Windows registry: Alternatively, you can set the values manually by using the regedit.exe application. Reboot the Windows system to apply the changes. Authenticate by using the kinit utility on an Identity Management client. Specify the Active Directory user with the user name and domain name: The -X option specifies the opensc-pkcs11.so module as the pre-authentication attribute. Additional resources kinit(1) man page on your system See MIT Kerberos Documentation for /etc/krb5.conf settings.
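As an optional check that is not part of the procedure above, you can verify that kinit obtained a Kerberos ticket-granting ticket by listing the credential cache; the realm shown in the comment is a placeholder.

# List the cached Kerberos tickets for the current user
klist
# A successful smart card login should show a krbtgt/AD.DOMAIN.COM@AD.DOMAIN.COM ticket for the AD user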
[ "[realms] AD.DOMAIN.COM = { pkinit_eku_checking = kpServerAuth pkinit_kdc_hostname = adserver.ad.domain.com }", "Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Kdc] \"UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors\"=dword:00000001 [HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\LSA\\Kerberos\\Parameters] \"UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors\"=dword:00000001", "kinit -X X509_user_identity='PKCS11:opensc-pkcs11.so' [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_smart_card_authentication/authenticating-as-an-active-directory-user-using-pkinit-with-a-smart-card_managing-smart-card-authentication
7.167. procps
7.167. procps 7.167.1. RHBA-2015:1407 - procps bug fix and enhancement update Updated procps packages that fix two bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The procps packages contain a set of system utilities that provide system information. The procps packages include the following utilities: ps, free, skill, pkill, pgrep, snice, tload, top, uptime, vmstat, w, watch, and pwdx. Bug Fixes BZ# 1163404 Previously, behavior of the libproc library was unreliable when it was loaded with the dlopen() call after the environment was changed with the setenv() call. As a consequence, an invalid memory access error could occur in libproc. With this update, the find_elf_note() function obtains the auxiliary vector values using a different and safer method based on parsing the /proc/self/auxv file, and the described problem no longer occurs. BZ# 1172059 Prior to this update, the stat2proc() function did not process empty files correctly. Consequently, when an empty stat file was processed, the ps utility could terminate unexpectedly with a segmentation fault. Handling of empty stat files has been fixed, and ps no longer crashes in this scenario. Enhancements BZ# 1120580 This update introduces the new "--system" option to the sysctl utility. This option enables sysctl to process configuration files from a group of system directories. BZ# 993072 The new "-h" option has been added to the "free" utility. The purpose of this option is to show all output fields automatically scaled to the shortest three-digit representation including the unit, making the output conveniently human-readable. BZ# 1123311 The "w" utility now includes the "-i" option to display IP addresses instead of host names in the "FROM" column. Users of procps are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
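The following commands simply exercise the options described above; the output is system-specific and omitted here.

# Process configuration files from the group of system directories (new --system option)
sysctl --system
# Print memory usage scaled to human-readable units (new -h option)
free -h
# Show logged-in users with IP addresses instead of host names in the FROM column (new -i option)
w -i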
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-procps
Chapter 19. Clustering
Chapter 19. Clustering clufter The clufter package, available as a Technology Preview in Red Hat Enterprise Linux 6, provides a tool for transforming and analyzing cluster configuration formats. It can be used to assist with migration from an older stack configuration to a newer configuration that leverages Pacemaker. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command. Package: clufter-0.56.2-1 luci support for fence_sanlock The luci tool now supports the sanlock fence agent as a Technology Preview. The agent is available in luci's list of agents. Package: luci-0.26.0-78 Recovering a node using a hardware watchdog device The new fence_sanlock agent and checkquorum.wdmd, included in Red Hat Enterprise Linux 6.4 as a Technology Preview, provide new mechanisms to trigger the recovery of a node using a hardware watchdog device. Tutorials on how to enable this Technology Preview will be available at https://fedorahosted.org/cluster/wiki/HomePage . Note that SELinux in enforcing mode is currently not supported. Package: cluster-3.0.12.1-78
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/chap-red_hat_enterprise_linux-6.8_technical_notes-technology_previews-clustering
Chapter 11. SecretList [image.openshift.io/v1]
Chapter 11. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object Required items 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets GET : read secrets of the specified ImageStream 11.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets Table 11.1. Global path parameters Parameter Type Description name string name of the SecretList HTTP method GET Description read secrets of the specified ImageStream Table 11.2. HTTP responses HTTP code Response body 200 - OK SecretList schema 401 - Unauthorized Empty
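As an illustration of the GET endpoint listed above, the request can be issued with the OpenShift CLI; the namespace and image stream name are placeholders, and this is a sketch rather than part of the API reference itself.

# Read the secrets of the specified ImageStream through the documented endpoint
oc get --raw /apis/image.openshift.io/v1/namespaces/<namespace>/imagestreams/<name>/secrets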
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/secretlist-image-openshift-io-v1
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/red_hat_openshift_data_foundation_architecture/providing-feedback-on-red-hat-documentation_rhodf
Chapter 5. Transaction Support
Chapter 5. Transaction Support 5.1. Transaction Support JBoss Data Virtualization uses XA transactions for participating in global transactions and for demarcating its local and command-scoped transactions. Refer to the Red Hat JBoss Data Virtualization Development Guide Volume 1: Client Development for more information about the transaction subsystem. Table 5.1. JBoss Data Virtualization Transaction Scopes Scope Description Command Treats the user command as if all source commands are executed within the scope of the same transaction. The AutoCommitTxn execution property controls the behavior of command-level transactions. Local The transaction boundary is local and is defined by a single client session. Global JBoss Data Virtualization participates in a global transaction as an XA Resource. The default transaction isolation level for JBoss Data Virtualization is READ_COMMITTED.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-transaction_support
21.2. Terminology
21.2. Terminology This section explains the terms used throughout this chapter. libguestfs (GUEST FileSystem LIBrary) - the underlying C library that provides the basic functionality for opening disk images, reading and writing files, and so on. You can write C programs directly to this API. guestfish (GUEST Filesystem Interactive SHell) is an interactive shell that you can use from the command line or from shell scripts. It exposes all of the functionality of the libguestfs API. Various virt tools are built on top of libguestfs, and these provide a way to perform specific single tasks from the command line. These tools include virt-df , virt-rescue , virt-resize , and virt-edit . augeas is a library for editing the Linux configuration files. Although this is separate from libguestfs, much of the value of libguestfs comes from the combination with this tool. guestmount is an interface between libguestfs and FUSE. It is primarily used to mount file systems from disk images on your host physical machine. This functionality is not necessary, but can be useful.
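To make the terminology concrete, the following sketch shows how the tools are typically invoked against a disk image; /path/to/disk.img and /mnt/guest are placeholders, and the commands assume the corresponding libguestfs tool packages are installed.
# Report free and used space on the guest's file systems (a virt tool built on libguestfs)
virt-df -a /path/to/disk.img
# Open the image read-only in the interactive guestfish shell, with automatic inspection of its operating system
guestfish --ro -a /path/to/disk.img -i
# Mount the guest file systems on the host through FUSE, read-only
guestmount --ro -a /path/to/disk.img -i /mnt/guest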
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-terminology
Chapter 216. Lumberjack Component
Chapter 216. Lumberjack Component Available as of Camel version 2.18 The Lumberjack component retrieves logs sent over the network using the Lumberjack protocol, from Filebeat for instance. The network communication can be secured with SSL. This component only supports consumer endpoints. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-lumberjack</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 216.1. URI format lumberjack:host lumberjack:host:port You can append query options to the URI in the following format, ?option=value&option=value&... 216.2. Options The Lumberjack component supports 3 options, which are listed below. Name Description Default Type sslContextParameters (security) Sets the default SSL configuration to use for all the endpoints. You can also configure it directly at the endpoint level. SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Lumberjack endpoint is configured using URI syntax: with the following path and query parameters: 216.2.1. Path Parameters (2 parameters): Name Description Default Type host Required Network interface on which to listen for Lumberjack String port Network port on which to listen for Lumberjack 5044 int 216.2.2. Query Parameters (5 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sslContextParameters (consumer) SSL configuration SSLContextParameters exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 216.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.lumberjack.enabled Enable lumberjack component true Boolean camel.component.lumberjack.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.lumberjack.ssl-context-parameters Sets the default SSL configuration to use for all the endpoints. You can also configure it directly at the endpoint level. The option is a org.apache.camel.util.jsse.SSLContextParameters type. 
String camel.component.lumberjack.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 216.4. Result The result body is a Map<String, Object> object. 216.5. Lumberjack Usage Samples 216.5.1. Example 1: Streaming the log messages RouteBuilder builder = new RouteBuilder() { public void configure() { from("lumberjack:0.0.0.0"). // Listen on all network interfaces using the default port setBody(simple("${body[message]}")). // Select only the log message to("stream:out"); // Write it into the output stream } };
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-lumberjack</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "lumberjack:host lumberjack:host:port", "lumberjack:host:port", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"lumberjack:0.0.0.0\"). // Listen on all network interfaces using the default port setBody(simple(\"USD{body[message]}\")). // Select only the log message to(\"stream:out\"); // Write it into the output stream } };" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/lumberjack-component
14.9. Running Self-Tests
14.9. Running Self-Tests The Certificate System has the added functionality to allow self-tests of the server. The self-tests are run at start up and can also be run on demand. The startup self-tests run when the server starts and keep the server from starting if a critical self-test fails. The on-demand self-tests are run by clicking the self-tests button in the subsystem console. 14.9.1. Running Self-Tests The on-demand self-test for the CA, OCSP, KRA, or TKS subsystems are run from the console. The on-demand self-tests for the TPS system are run from the web services page. 14.9.1.1. Running Self-Tests from the Console Note pkiconsole is being deprecated. Log into the Console. Select the subsystem name at the top of the left pane. Select the Self Tests tab. Click Run . The self-tests that are configured for the subsystem will run. If any critical self-tests fail, the server will stop. The On-Demand Self Tests Results window appears, showing the logged events for this run of the self-tests. 14.9.1.2. Running TPS Self-Tests To run TPS self-tests from the command-line interface (CLI): pki tps-selftest-find pki tps-selftest-run pki tps-selftest-show 14.9.2. Self-Test Logging A separate log, selftest.log , is added to the log directory that contains reports for both the start up self-tests and the on-demand self-tests. This log is configured by changing the setting for the log in the CS.cfg file. See the Modifying Self-Test Configuration section in the Red Hat Certificate System Planning, Installation, and Deployment Guide for details. 14.9.3. Configuring POSIX System ACLs POSIX system access control rules provide finer granularity over system user permissions. These ACLs must be set for each instance after it is fully configured. For more details on ACLs, see the corresponding chapter in the Red Hat Enterprise Linux System Administration Guide . 14.9.3.1. Setting POSIX System ACLs for the CA, KRA, OCSP, TKS, and TPS Modern file systems like ext4 and XFS enable ACLs by default, and are most likely used on modern Red Hat Enterprise Linux installations. Stop the instance. Set the group readability to the pkiadmin group for the instance's directories and files. Apply execute (x) ACL permissions on all directories: Remove group readability for the pkiadmin group from the instance's signedAudit/ directory and its associated files: Set group readability for the pkiaudit group for the instance's signedAudit/ directory and its associated files: Re-apply execute (x) ACL permissions on the signedAudit/ directory and all of its subdirectories: Start the instance. Confirm that the file access controls were properly applied by using the getfacl command to show the current ACL settings:
[ "pkiconsole https://server.example.com: admin_port/subsystem_type", "pki-server stop instance_name", "setfacl -R -L -m g:pkiadmin:r,d:g:pkiadmin:r /var/lib/pki/ instance_name", "find -L /var/lib/pki/ instance_name -type d -exec setfacl -L -n -m g:pkiadmin:rx,d:g:pkiadmin:rx {} \\;", "setfacl -R -L -x g:pkiadmin,d:g:pkiadmin /var/lib/pki/ instance_name /logs/signedAudit", "setfacl -R -L -m g:pkiaudit:r,d:g:pkiaudit:r /var/lib/pki/ instance_name /logs/signedAudit", "find -L /var/lib/pki/ instance_name /logs/signedAudit -type d -exec setfacl -L -n -m g:pkiaudit:rx,d:g:pkiaudit:rx {} \\;", "pki-server start instance_name", "getfacl /var/lib/pki/ instance_name /var/lib/pki/ instance_name / subsystem_type /logs/signedAudit/ getfacl: Removing leading '/' from absolute path names file: var/lib/pki/ instance_name owner: pkiuser group: pkiuser user::rwx group::rwx group:pkiadmin:r-x mask::rwx other::r-x default:user::rwx default:group::rwx default:group:pkiadmin:r-x default:mask::rwx default:other::r-x file: var/lib/pki/ instance_name /logs/signedAudit owner: pkiuser group: pkiaudit user::rwx group::rwx group:pkiaudit:r-x mask::rwx other::--- default:user::rwx default:group::rwx default:group:pkiaudit:r-x default:mask::rwx default:other::---" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/self_tests
CI/CD overview
CI/CD overview OpenShift Container Platform 4.14 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cicd_overview/index
Deploying OpenShift Data Foundation using IBM Z
Deploying OpenShift Data Foundation using IBM Z Red Hat OpenShift Data Foundation 4.17 Instructions on deploying Red Hat OpenShift Data Foundation to use local storage on IBM Z Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on IBM Z. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_z/index
Appendix B. Revision history
Appendix B. Revision history 0.4-9 Tue March 18 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue in RHEL-82566 (Installer) 0.4-8 Mon February 24 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue in RHELDOCS-19626 (Security) 0.4-7 Thu Jan 30 2025, Gabriela Fialova ( [email protected] ) Added an Known Issue RHELDOCS-19603 (IdM SSSD) 0.4-6 Mon Jan 20 2025, Gabriela Fialova ( [email protected] ) Added an Known Issue RHEL-13837 (Installer) 0.4-5 Wed Dec 4 2024, Gabriela Fialova ( [email protected] ) Updated the Customer Portal labs section Updated the Installation section 0.4-4 Tue Nov 19 2024, Gabi Fialova ( [email protected] ) Removed a Known Issue BZ-2057471 (IdM) Updated a Known Issue BZ#2103327 (IdM) 0.4-3 Thu Oct 03 2024, Gabriela Fialova ( [email protected] ) Added an Known Issue RHEL-56135 (Installer). 0.4-2 Tue Sep 03 2024, Gabriela Fialova ( [email protected] ) Added a Known Issue RHELDOCS-17878 (Installer). 0.4-1 Thu Aug 22 2024, Gabriela Fialova ( [email protected] ) Added a Known Issue RHELDOCS-18764 (Installer). 0.4-0 Thu Jul 18 2024, Gabriela Fialova ( [email protected] ) Updated the abstract in the Deprecated functionalities section 0.3-9 Tue Jul 02 2024, Gabriela Fialova ( [email protected] ) Added a Known Issue RHEL-34154 (Installer). 0.3-8 Tue Jun 11 2024, Brian Angelica ( [email protected] ) Add Deprecated Functionality RHELDOCS-18049 (Shells and command-line tools). 0.3-7 Tue Jun 11 2024, Brian Angelica ( [email protected] ) Added an Known Issue RHEL-24847 (Shells and command-line tools). 0.3-6 Thu May 16 2024, Gabriela Fialova ( [email protected] ) Added an Known Issue RHEL-10019 (Virtualization). 0.3-5 Thu Apr 25 2024, Gabriela Fialova ( [email protected] ) Added an Enhancement BZ#2136610 (Identity Management). 0.3-4 Thu Apr 18 2024, Gabriela Fialova ( [email protected] ) Added an Enhancement RHEL-19142 (Networking). 0.3-3 Thu Mar 14 2024, Gabriela Fialova ( [email protected] ) Added an Enhancement RHEL-18359 (Kernel). Added a Known Issue RHEL-25967 (Kernel). 0.3-2 Mon Mar 04 2024, Gabriela Fialova ( [email protected] ) Added a Bug Fix Jira:SSSD-6096 (Identity Management). 0.3.1 Thu Feb 1 2024, Gabriela Fialova ( [email protected] ) Added a Known Issue BZ#1834716 (Security). 0.3-0 Fri Jan 12 2024, Marc Muehlfeld ( [email protected] ) Added a Known Issue Jira:RHEL-6496 (Networking). 0.2-9 Tue Dec 12 2023, Gabriela Fialova ( [email protected] ) Added a Tech Preview BZ#2162677 (IdM). 0.2-8 Thu Dec 7 2023, Lucie Varakova ( [email protected] ) Added a new feature BZ#2044200 (Kernel). 0.2-7 Mon Nov 20 2023, Gabriela Fialova ( [email protected] ) Added an Enhancement BZ#2165827 (Identity Management). 0.2-6 Mon Nov 13 2023, Gabriela Fialova ( [email protected] ) Added a Tech Preview JIRA:RHELDOCS-17040 (Virtualization) 0.2-5 Fri Nov 10 2023, Gabriela Fialova ( [email protected] ) Updated the module on Providing Feedback on RHEL Documentation. 0.2-4 Fri Nov 10 2023, Gabriela Fialova ( [email protected] ) Added a Tech Preview JIRA:RHELDOCS-17050 (Virtualization). 0.2-3 Thu Nov 2 2023, Gabriela Fialova ( [email protected] ) Updated doc text in BZ#2125371 (Networking). 0.2-2 Fri Oct 13 2023, Gabriela Fialova ( [email protected] ) Added a Tech Preview JIRA:RHELDOCS-16861 (Containers). 0.2-1 Sep 25 2023, Gabriela Fialova ( [email protected] ) Added a known issue BZ#2122636 (Desktop). 0.2-0 Sep 13 2023, Lenka Spackova ( [email protected] ) Fixed command formatting in BZ#2220915 . 
0.1-9 Sep 8 2023, Marc Muehlfeld ( [email protected] ) Added a deprecated functionality release note JIRA:RHELDOCS-16612 (Samba). Updated the Providing feedback on Red Hat documentation section. 0.1-8 Sep 5 2023, Gabriela Fialova ( [email protected] ) Added an enhancement BZ#2075017 (idm_ds). 0.1-7 Aug 31 2023, Gabriela Fialova ( [email protected] ) Added a known issue BZ#2230431 (plumbers). 0.1-6 Aug 29 2023, Gabriela Fialova ( [email protected] ) Added a known issue BZ#2220915 (IdM). 0.1-5 Aug 25 2023, Lucie Varakova ( [email protected] ) Added a known issue BZ#2214508 (Kernel). 0.1.4 Aug 17 2023, Gabriela Fialova ( [email protected] ) Add an enhancement BZ#2136937 (Plumbers). 0.1.3 Aug 14 2023, Lenka Spackova ( [email protected] ) Fixed a typo in BZ#2128410 . 0.1.2 Aug 09 2023, Gabriela Fialova ( [email protected] ) Updated a Security Bug Fix BZ#2155910 (CS). 0.1.1 Aug 07 2023, Gabriela Fialova ( [email protected] ) Updated a deprecated functionality release note BZ#2214130 (CS). 0.1.0 Aug 03 2023, Lenka Spackova ( [email protected] ) Fixed formatting in BZ#2142639 and BZ#2119102 . Improved abstract. 0.0.9 Aug 02 2023, Marc Muehlfeld ( [email protected] ) Updated a deprecated functionality release note BZ#1894877 (NetworkManager). 0.0.8 Aug 1 2023, Mirek Jahoda ( [email protected] ) Replaced the web console known issue with NBDE by a bug fix BZ#2207498 (RHEL web console). 0.0.7 Jul 27 2023, Gabriela Fialova ( [email protected] ) Amended 3 enhancements in kernel and 1 in compilers and dev tools as per DDF feedback. 0.0.6 Jul 25 2023, Gabriela Fialova ( [email protected] ) Added a Known Issue BZ#2109231 (Installer). 0.0.5 Jun 22 2023, Gabriela Fialova ( [email protected] ) Added an Enhancement BZ#2087247 (IdM). 0.0.4 Jun 8 2023, Gabriela Fialova ( [email protected] ) Added an Enhancement BZ#2190123 (kernel). 0.0.3 Jun 6 2023, Gabriela Fialova ( [email protected] ) Added RHELPLAN-159146 (IdM). 0.0.2 Jun 5 2023, Gabriela Fialova ( [email protected] ) Added a KI BZ#2176010 (virt). 0.0.1 May 10 2023, Gabriela Fialova ( [email protected] ) Release of the Red Hat Enterprise Linux 9.2 Release Notes. 0.0.0 Mar 29 2023, Gabriela Fialova ( [email protected] ) Release of the Red Hat Enterprise Linux 9.2 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/revision_history
Chapter 1. About Metering
Chapter 1. About Metering Important Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 1.1. Metering overview Metering is a general purpose data analysis tool that enables you to write reports to process data from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available. Metering focuses primarily on in-cluster metric data using Prometheus as a default data source, enabling users of metering to do reporting on pods, namespaces, and most other Kubernetes resources. You can install metering on OpenShift Container Platform 4.x clusters and above. 1.1.1. Installing metering You can install metering using the CLI and the web console on OpenShift Container Platform 4.x and above. To learn more, see installing metering . 1.1.2. Upgrading metering You can upgrade metering by updating the Metering Operator subscription. Review the following tasks: The MeteringConfig custom resource specifies all the configuration details for your metering installation . When you first install the metering stack, a default MeteringConfig custom resource is generated. Use the examples in the documentation to modify this default file. A report custom resource provides a method to manage periodic Extract Transform and Load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as ReportQuery resources that provide the actual SQL query to run, and ReportDataSource resources that define the data available to the ReportQuery and Report resources. 1.1.3. Using metering You can use metering for writing reports and viewing report results. To learn more, see examples of using metering . 1.1.4. Troubleshooting metering You can use the following sections to troubleshoot specific issues with metering . Not enough compute resources StorageClass resource not configured Secret not configured correctly 1.1.5. Debugging metering You can use the following sections to debug specific issues with metering . Get reporting Operator logs Query Presto using presto-cli Query Hive using beeline Port-forward to the Hive web UI Port-forward to HDFS Metering Ansible Operator 1.1.6. Uninstalling metering You can remove and clean metering resources from your OpenShift Container Platform cluster. To learn more, see uninstalling metering . 1.1.7. Metering resources Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. Metering is managed using the following custom resource definitions (CRDs): MeteringConfig Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. Report Controls what query to use, when, and how often the query should be run, and where to store the results. ReportQuery Contains the SQL queries used to perform analysis on the data contained within ReportDataSource resources. 
ReportDataSource Controls the data available to ReportQuery and Report resources. Allows configuring access to different databases for use within metering.
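For orientation only, once the metering stack is installed these custom resources can be listed with standard oc commands; the openshift-metering namespace is the usual install target and the report name is a placeholder, so adjust both to match your deployment.
# List the metering custom resources managed by the Metering Operator
oc -n openshift-metering get meteringconfigs,reports,reportqueries,reportdatasources
# Inspect a single report, including its schedule and status
oc -n openshift-metering get report <report-name> -o yaml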
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/metering/about-metering
Chapter 3. Working with host groups
Chapter 3. Working with host groups A host group acts as a template for common host settings. Instead of defining the settings individually for each host, use host groups to define common settings once and apply them to multiple hosts. 3.1. Host group settings and nested host groups A host group can define many settings for hosts, such as lifecycle environment, content view, or Ansible roles that are available to the hosts. Important When you change the settings of an existing host group, the new settings do not propagate to the hosts assigned to the host group. Only Puppet class settings get updated on hosts after you change them in the host group. You can create a hierarchy of host groups. Aim to have one base level host group that represents all hosts in your organization and provides general settings, and then nested groups that provide specific settings. Satellite applies host settings in the following order when nesting host groups: Host settings take priority over host group settings. Nested host group settings take priority over parent host group settings. Example 3.1. Nested host group hierarchy You create a top-level host group named Base and two nested host groups named Webserver and Storage . The nested host groups are associated with multiple hosts. You also create host custom.example.com that is not associated with any host group. You define the operating system on the top-level host group ( Base ) and Ansible roles on the nested host groups ( Webservers and Storage ). Top-level host group Nested host group Hosts Settings inherited from host groups Base This host group applies the Red Hat Enterprise Linux 8.8 operating system setting. Webservers This host group applies the linux-system-roles.selinux Ansible role. webserver1.example.com Hosts use the following settings: Red Hat Enterprise Linux 8.8 defined by host group Base linux-system-roles.selinux defined by host group Webservers webserver2.example.com Storage This host group applies the linux-system-roles.postfix Ansible role. storage1.example.com Hosts use the following settings: Red Hat Enterprise Linux 8.8 defined by host group Base linux-system-roles.postfix defined by host group Storage storage2.example.com [No host group] custom.example.com No settings inherited from host groups. Example 3.2. Nested host group settings You create a top-level host group named Base and two nested host groups named Webserver and Storage . You also create host custom.example.com that is associated with the top-level host group Base , but no nested host group. You define different values for the operating system and Ansible role settings on the top-level host group ( Base ) and nested host groups ( Webserver and Storage ). 
Top-level host group Nested host group Host Settings inherited from host groups Base This host group applies these settings: The Red Hat Enterprise Linux 8.8 operating system The linux-system-roles.selinux Ansible role Webservers This host group applies these settings: The Red Hat Enterprise Linux 8.9 operating system No Ansible role webserver1.example.com Hosts use the following settings: The Red Hat Enterprise Linux 8.9 operating system from host group Webservers The linux-system-roles.selinux Ansible role from host group Base webserver2.example.com Storage This host group applies these settings: No operating system The linux-system-roles.postfix Ansible role storage1.example.com Hosts use the following settings: The Red Hat Enterprise Linux 8.8 operating system from host group Base The linux-system-roles.postfix Ansible role from host group Storage storage2.example.com [No nested host group] custom.example.com Host uses the following settings: The Red Hat Enterprise Linux 8.8 operating system from host group Base The linux-system-roles.selinux Ansible role from host group Base 3.2. Creating a host group Create a host group to be able to apply host settings to multiple hosts. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Configure > Host Groups and click Create Host Group . If you have an existing host group that you want to inherit attributes from, you can select a host group from the Parent list. If you do not, leave this field blank. Enter a Name for the new host group. Enter any further information that you want future hosts to inherit. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove. Click the additional tabs and add any details that you want to attribute to the host group. Note Puppet fails to retrieve the Puppet CA certificate while registering a host with a host group associated with a Puppet environment created inside a Production environment. To create a suitable Puppet environment to be associated with a host group, manually create a directory: Click Submit to save the host group. CLI procedure Create the host group with the hammer hostgroup create command. For example: 3.3. Creating a host group for each lifecycle environment Use this procedure to create a host group for the Library lifecycle environment and add nested host groups for other lifecycle environments. Procedure To create a host group for each lifecycle environment, run the following Bash script: MAJOR=" My_Major_OS_Version " ARCH=" My_Architecture " ORG=" My_Organization " LOCATIONS=" My_Location " PTABLE_NAME=" My_Partition_Table " DOMAIN=" My_Domain " hammer --output csv --no-headers lifecycle-environment list --organization "USD{ORG}" | cut -d ',' -f 2 | while read LC_ENV; do [[ USD{LC_ENV} == "Library" ]] && continue hammer hostgroup create --name "rhel-USD{MAJOR}server-USD{ARCH}-USD{LC_ENV}" \ --architecture "USD{ARCH}" \ --partition-table "USD{PTABLE_NAME}" \ --domain "USD{DOMAIN}" \ --organizations "USD{ORG}" \ --query-organization "USD{ORG}" \ --locations "USD{LOCATIONS}" \ --lifecycle-environment "USD{LC_ENV}" done 3.4. Adding a host to a host group You can add a host to a host group in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Select the host group from the Host Group list. Click Submit . 
Verification The Details card under the Overview tab now shows the host group your host belongs to. 3.5. Changing the host group of a host Use this procedure to change the Host Group of a host. If you reprovision a host after changing the host group, the fresh values that the host inherits from the host group will be applied. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click the name of the host you want to modify. Click Edit . Select the new host group from the Host Group list. Click Submit . Verification The Details card under the Overview tab now shows the host group your host belongs to.
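The same checks and the host group change can also be scripted with hammer; the host and host group names below are placeholders, and the --hostgroup-title option used for the nested group path is an assumption to verify against hammer host update --help on your Satellite.
# List the existing host groups in the organization
hammer hostgroup list --organization "My_Organization"
# Move a host to a different (nested) host group
hammer host update --name "webserver1.example.com" --hostgroup-title "Base/Webservers"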
[ "mkdir /etc/puppetlabs/code/environments/ example_environment", "hammer hostgroup create --name \"Base\" --architecture \"My_Architecture\" --content-source-id _My_Content_Source_ID_ --content-view \"_My_Content_View_\" --domain \"_My_Domain_\" --lifecycle-environment \"_My_Lifecycle_Environment_\" --locations \"_My_Location_\" --medium-id _My_Installation_Medium_ID_ --operatingsystem \"_My_Operating_System_\" --organizations \"_My_Organization_\" --partition-table \"_My_Partition_Table_\" --puppet-ca-proxy-id _My_Puppet_CA_Proxy_ID_ --puppet-environment \"_My_Puppet_Environment_\" --puppet-proxy-id _My_Puppet_Proxy_ID_ --root-pass \"My_Password\" --subnet \"_My_Subnet_\"", "MAJOR=\" My_Major_OS_Version \" ARCH=\" My_Architecture \" ORG=\" My_Organization \" LOCATIONS=\" My_Location \" PTABLE_NAME=\" My_Partition_Table \" DOMAIN=\" My_Domain \" hammer --output csv --no-headers lifecycle-environment list --organization \"USD{ORG}\" | cut -d ',' -f 2 | while read LC_ENV; do [[ USD{LC_ENV} == \"Library\" ]] && continue hammer hostgroup create --name \"rhel-USD{MAJOR}server-USD{ARCH}-USD{LC_ENV}\" --architecture \"USD{ARCH}\" --partition-table \"USD{PTABLE_NAME}\" --domain \"USD{DOMAIN}\" --organizations \"USD{ORG}\" --query-organization \"USD{ORG}\" --locations \"USD{LOCATIONS}\" --lifecycle-environment \"USD{LC_ENV}\" done" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/working_with_host_groups_managing-hosts
Chapter 11. Container Images Based on Red Hat Software Collections 3.0
Chapter 11. Container Images Based on Red Hat Software Collections 3.0 Component Description Supported architectures Application Images rhscl/nodejs-8-rhel7 Node.js 8 platform for building and running applications (EOL) x86_64, s390x, ppc64le rhscl/php-71-rhel7 PHP 7.1 platform for building and running applications (EOL) x86_64 rhscl/python-36-rhel7 Python 3.6 platform for building and running applications (EOL) x86_64, s390x, ppc64le Daemon Images rhscl/nginx-112-rhel7 nginx 1.12 server and a reverse proxy server (EOL) x86_64, s390x, ppc64le Database Images rhscl/mariadb-102-rhel7 MariaDB 10.2 SQL database server (EOL) x86_64 rhscl/mongodb-34-rhel7 MongoDB 3.4 NoSQL database server (EOL) x86_64 rhscl/postgresql-96-rhel7 PostgreSQL 9.6 SQL database server (EOL) x86_64 Legend: x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z ppc64le - IBM POWER, little endian All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.0, see the Red Hat Software Collections 3.0 Release Notes . For more information about the Red Hat Developer Toolset 7.0 components, see the Red Hat Developer Toolset 7 User Guide . For information regarding container images based on Red Hat Software Collections 2, see the Using Red Hat Software Collections 2 Container Images . EOL images are no longer supported.
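As a usage sketch, an image from the table can be pulled and run on a Red Hat Enterprise Linux 7 host with access to the Red Hat Container Registry; the registry host shown is the one commonly used for these rhscl images, and the python3 invocation assumes the collection is enabled in the image's default environment.
# Pull one of the Red Hat Software Collections 3.0 images
docker pull registry.access.redhat.com/rhscl/python-36-rhel7
# Run it once to confirm the interpreter shipped in the image
docker run --rm registry.access.redhat.com/rhscl/python-36-rhel7 python3 --version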
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/RHSCL_3.0_images
Chapter 58. Creating guided rule templates
Chapter 58. Creating guided rule templates You can use guided rule templates to define rule structures with placeholder values (template keys) that correspond to actual values defined in a data table. Guided rule templates are an efficient alternative to defining sets of many guided rules individually that use the same structure. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule Template . Enter an informative Guided Rule Template name and select the appropriate Package . The package that you specify must be the same package where the required data objects have been assigned or will be assigned. Click Ok to create the rule template. The new guided rule template is now listed in the Guided Rule Templates panel of the Project Explorer . Click the Data Objects tab and confirm that all data objects required for your rules are listed. If not, click New item to import data objects from other packages, or create data objects within your package. After all data objects are in place, return to the Model tab and use the buttons on the right side of the window to add and define the WHEN (condition) and THEN (action) sections of the rule template, based on the available data objects. For the field values that vary per rule, use template keys in the format USDkey in the rule designer or in the format @{key} in free form DRL (if used). Figure 58.1. Sample guided rule template Note on template keys Template keys are fundamental in guided rule templates. Template keys are what enable field values in the templates to be interchanged with actual values that you define in the corresponding data table to generate different rules from the same template. You can use other value types, such as Literal or Formula , for values that are part of the rule structure of all rules based on that template. However, for any values that differ among the rules, use the Template key field type with a specified key. Without template keys in a guided rule template, the corresponding data table is not generated in the template designer and the template essentially functions as an individual guided rule. The WHEN part of the rule template is the condition that must be met to execute an action. For example, if a telecommunications company charges customers based on the services they subscribe to (Internet, phone, and TV), then one of the WHEN conditions would be internetService | equal to | USDhasInternetService . The template key USDhasInternetService is interchanged with an actual Boolean value ( true or false ) defined in the data table for the template. The THEN part of the rule template is the action to be performed when the conditional part of the rule has been met. For example, if a customer subscribes to only Internet service, a THEN action for RecurringPayment with a template key USDamount would set the actual monthly amount to the integer value defined for Internet service charges in the data table. After you define all components of the rule, click Save in the guided rule templates designer to save your work. 58.1. Adding WHEN conditions in guided rule templates The WHEN part of the rule contains the conditions that must be met to execute an action. For example, if a telecommunications company charges customers based on the services they subscribe to (Internet, phone, and TV), then one of the WHEN conditions would be internetService | equal to | USDhasInternetService . 
The template key USDhasInternetService is interchanged with an actual Boolean value ( true or false ) defined in the data table for the template. Prerequisites All data objects required for your rules have been created or imported and are listed in the Data Objects tab of the guided rule templates designer. Procedure In the guided rule templates designer, click the plus icon ( ) on the right side of the WHEN section. The Add a condition to the rule window with the available condition elements opens. Figure 58.2. Add a condition to the rule The list includes the data objects from the Data Objects tab of the guided rule templates designer, any DSL objects defined for the package, and the following standard options: The following does not exist: Use this to specify facts and constraints that must not exist. The following exists: Use this to specify facts and constraints that must exist. This option is triggered on only the first match, not subsequent matches. Any of the following are true: Use this to list any facts or constraints that must be true. From: Use this to define a From conditional element for the rule. From Accumulate: Use this to define an Accumulate conditional element for the rule. From Collect: Use this to define a Collect conditional element for the rule. From Entry Point: Use this to define an Entry Point for the pattern. Free form DRL: Use this to insert a free-form DRL field where you can define condition elements freely, without the guided rules designer. For template keys in free form DRL, use the format @{key} . Choose a condition element (for example, Customer ) and click Ok . Click the condition element in the guided rule templates designer and use the Modify constraints for Customer window to add a restriction on a field, apply multiple field constraints, add a new formula style expression, apply an expression editor, or set a variable name. Figure 58.3. Modify a condition Note A variable name enables you to identify a fact or field in other constructs within the guided rule. For example, you could set the variable of Customer to c and then reference c in a separate Applicant constraint that specifies that the Customer is the Applicant . c : Customer() Applicant( this == c ) After you select a constraint, the window closes automatically. Choose an operator for the restriction (for example, equal to ) from the drop-down menu to the added restriction. Click the edit icon ( ) to define the field value. Select Template key and add a template key in the format USDkey if this value varies among the rules that are based on this template. This allows the field value to be interchanged with actual values that you define in the corresponding data table to generate different rules from the same template. For field values that do not vary among the rules and are part of the rule template, you can use any other value type. To apply multiple field constraints, click the condition and in the Modify constraints for Customer window, select All of(And) or Any of(Or) from the Multiple field constraint drop-down menu. Figure 58.4. Add multiple field constraints Click the constraint in the guided rule templates designer and further define the field values. After you define all condition elements, click Save in the guided rule templates designer to save your work. 58.2. Adding THEN actions in guided rule templates The THEN part of the rule template is the action to be performed when the conditional part of the rule has been met. 
For example, if a customer subscribes to only Internet service, a THEN action for RecurringPayment with a template key USDamount would set the actual monthly amount to the integer value defined for Internet service charges in the data table. Prerequisites All data objects required for your rules have been created or imported and are listed in the Data Objects tab of the guided rule templates designer. Procedure In the guided rule templates designer, click the plus icon ( ) on the right side of the THEN section. The Add a new action window with the available action elements opens. Figure 58.5. Add a new action to the rule The list includes insertion and modification options based on the data objects in the Data Objects tab of the guided rule templates designer, and on any DSL objects defined for the package: Insert fact: Use this to insert a fact and define resulting fields and values for the fact. Logically Insert fact: Use this to insert a fact logically into the decision engine and define resulting fields and values for the fact. The decision engine is responsible for logical decisions on insertions and retractions of facts. After regular or stated insertions, facts have to be retracted explicitly. After logical insertions, facts are automatically retracted when the conditions that originally asserted the facts are no longer true. Add free form DRL: Use this to insert a free-form DRL field where you can define condition elements freely, without the guided rules designer. For template keys in free form DRL, use the format @{key} . Choose an action element (for example, Logically Insert fact RecurringPayment ) and click Ok . Click the action element in the guided rule templates designer and use the Add a field window to select a field. Figure 58.6. Add a field After you select a field, the window closes automatically. Click the edit icon ( ) to define the field value. Select Template key and add a template key in the format USDkey if this value varies among the rules that are based on this template. This allows the field value to be interchanged with actual values that you define in the corresponding data table to generate different rules from the same template. For field values that do not vary among the rules and are part of the rule template, you can use any other value type. After you define all action elements, click Save in the guided rule templates designer to save your work. 58.3. Defining enumerations for drop-down lists in rule assets Enumeration definitions in Business Central determine the possible values of fields for conditions or actions in guided rules, guided rule templates, and guided decision tables. An enumeration definition contains a fact.field mapping to a list of supported values that are displayed as a drop-down list in the relevant field of a rule asset. When a user selects a field that is based on the same fact and field as the enumeration definition, the drop-down list of defined values is displayed. You can define enumerations in Business Central or in the DRL source for your Red Hat Decision Manager project. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Enumeration . Enter an informative Enumeration name and select the appropriate Package . The package that you specify must be the same package where the required data objects and relevant rule assets have been assigned or will be assigned. Click Ok to create the enumeration. 
The new enumeration is now listed in the Enumeration Definitions panel of the Project Explorer . In the Model tab of the enumerations designer, click Add enum and define the following values for the enumeration: Fact : Specify an existing data object within the same package of your project with which you want to associate this enumeration. Open the Data Objects panel in the Project Explorer to view the available data objects, or create the relevant data object as a new asset if needed. Field : Specify an existing field identifier that you defined as part of the data object that you selected for the Fact . Open the Data Objects panel in the Project Explorer to select the relevant data object and view the list of available Identifier options. You can create the relevant identifier for the data object if needed. Context : Specify a list of values in the format ['string1','string2','string3'] or [integer1,integer2,integer3] that you want to map to the Fact and Field definitions. These values will be displayed as a drop-down list for the relevant field of the rule asset. For example, the following enumeration defines the drop-down values for applicant credit rating in a loan application decision service: Figure 58.7. Example enumeration for applicant credit rating in Business Central Example enumeration for applicant credit rating in the DRL source In this example, for any guided rule, guided rule template, or guided decision table that is in the same package of the project and that uses the Applicant data object and the creditRating field, the configured values are available as drop-down options: Figure 58.8. Example enumeration drop-down options in a guided rule or guided rule template Figure 58.9. Example enumeration drop-down options in a guided decision table 58.3.1. Advanced enumeration options for rule assets For advanced use cases with enumeration definitions in your Red Hat Decision Manager project, consider the following extended options for defining enumerations: Mapping between DRL values and values in Business Central If you want the enumeration values to appear differently or more completely in the Business Central interface than they appear in the DRL source, use a mapping in the format 'fact.field' : ['sourceValue1=UIValue1','sourceValue2=UIValue2', ... ] for your enumeration definition values. For example, in the following enumeration definition for loan status, the options A or D are used in the DRL file but the options Approved or Declined are displayed in Business Central: Enumeration value dependencies If you want the selected value in one drop-down list to determine the available options in a subsequent drop-down list, use the format 'fact.fieldB[fieldA=value1]' : ['value2', 'value3', ... ] for your enumeration definition. For example, in the following enumeration definition for insurance policies, the policyType field accepts the values Home or Car . The type of policy that the user selects determines the policy coverage field options that are then available: Note Enumeration dependencies are not applied across rule conditions and actions. For example, in this insurance policy use case, the selected policy in the rule condition does not determine the available coverage options in the rule actions, if applicable. 
External data sources in enumerations If you want to retrieve a list of enumeration values from an external data source instead of defining the values directly in the enumeration definition, on the class path of your project, add a helper class that returns a java.util.List list of strings. In the enumeration definition, instead of specifying a list of values, identify the helper class that you configured to retrieve the values externally. For example, in the following enumeration definition for loan applicant region, instead of defining applicant regions explicitly in the format 'Applicant.region' : ['country1', 'country2', ... ] , the enumeration uses a helper class that returns the list of values defined externally: In this example, a DataHelper class contains a getListOfRegions() method that returns a list of strings. The enumerations are loaded in the drop-down list for the relevant field in the rule asset. You can also load dependent enumeration definitions dynamically from a helper class by identifying the dependent field as usual and enclosing the call to the helper class within quotation marks: If you want to load all enumeration data entirely from an external data source, such as a relational database, you can implement a Java class that returns a Map<String, List<String>> map. The key of the map is the fact.field mapping and the value is a java.util.List<String> list of values. For example, the following Java class defines loan applicant regions for the related enumeration: public class SampleDataSource { public Map<String, List<String>> loadData() { Map data = new HashMap(); List d = new ArrayList(); d.add("AU"); d.add("DE"); d.add("ES"); d.add("UK"); d.add("US"); ... data.put("Applicant.region", d); return data; } } The following enumeration definition correlates to this example Java class. The enumeration contains no references to fact or field names because they are defined in the Java class: The = operator enables Business Central to load all enumeration data from the helper class. The helper methods are statically evaluated when the enumeration definition is requested for use in an editor. Note Defining an enumeration without a fact and field definition is currently not supported in Business Central. To define the enumeration for the associated Java class in this way, use the DRL source in your Red Hat Decision Manager project. 58.4. Adding other rule options You can also use the rule designer to add metadata within a rule, define additional rule attributes (such as salience and no-loop ), and freeze areas of the rule to restrict modifications to conditions or actions. Procedure In the rule designer, click (show options... ) under the THEN section. Click the plus icon ( ) on the right side of the window to add options. Select an option to be added to the rule: Metadata: Enter a metadata label and click the plus icon ( ). Then enter any needed data in the field provided in the rule designer. Attribute: Select from the list of rule attributes. Then further define the value in the field or option displayed in the rule designer. Freeze areas for editing: Select Conditions or Actions to restrict the area from being modified in the rule designer. Figure 58.10. Rule options Click Save in the rule designer to save your work. 58.4.1. Rule attributes Rule attributes are additional specifications that you can add to business rules to modify rule behavior. The following table lists the names and supported values of the attributes that you can assign to rules: Table 58.1. 
Rule attributes Attribute Value salience An integer defining the priority of the rule. Rules with a higher salience value are given higher priority when ordered in the activation queue. Example: salience 10 enabled A Boolean value. When the option is selected, the rule is enabled. When the option is not selected, the rule is disabled. Example: enabled true date-effective A string containing a date and time definition. The rule can be activated only if the current date and time is after a date-effective attribute. Example: date-effective "4-Sep-2018" date-expires A string containing a date and time definition. The rule cannot be activated if the current date and time is after the date-expires attribute. Example: date-expires "4-Oct-2018" no-loop A Boolean value. When the option is selected, the rule cannot be reactivated (looped) if a consequence of the rule re-triggers a previously met condition. When the condition is not selected, the rule can be looped in these circumstances. Example: no-loop true agenda-group A string identifying an agenda group to which you want to assign the rule. Agenda groups allow you to partition the agenda to provide more execution control over groups of rules. Only rules in an agenda group that has acquired a focus are able to be activated. Example: agenda-group "GroupName" activation-group A string identifying an activation (or XOR) group to which you want to assign the rule. In activation groups, only one rule can be activated. The first rule to fire will cancel all pending activations of all rules in the activation group. Example: activation-group "GroupName" duration A long integer value defining the duration of time in milliseconds after which the rule can be activated, if the rule conditions are still met. Example: duration 10000 timer A string identifying either int (interval) or cron timer definitions for scheduling the rule. Example: timer ( cron:* 0/15 * * * ? ) (every 15 minutes) calendar A Quartz calendar definition for scheduling the rule. Example: calendars "* * 0-7,18-23 ? * *" (exclude non-business hours) auto-focus A Boolean value, applicable only to rules within agenda groups. When the option is selected, the time the rule is activated, a focus is automatically given to the agenda group to which the rule is assigned. Example: auto-focus true lock-on-active A Boolean value, applicable only to rules within rule flow groups or agenda groups. When the option is selected, the time the ruleflow group for the rule becomes active or the agenda group for the rule receives a focus, the rule cannot be activated again until the ruleflow group is no longer active or the agenda group loses the focus. This is a stronger version of the no-loop attribute, because the activation of a matching rule is discarded regardless of the origin of the update (not only by the rule itself). This attribute is ideal for calculation rules where you have a number of rules that modify a fact and you do not want any rule re-matching and firing again. Example: lock-on-active true ruleflow-group A string identifying a rule flow group. In rule flow groups, rules can fire only when the group is activated by the associated rule flow. Example: ruleflow-group "GroupName" dialect A string identifying either JAVA or MVEL as the language to be used for code expressions in the rule. By default, the rule uses the dialect specified at the package level. Any dialect specified here overrides the package dialect setting for the rule. 
Example: dialect "JAVA" Note When you use Red Hat Decision Manager without the executable model, the dialect "JAVA" rule consequences support only Java 5 syntax. For more information about executable models, see Packaging and deploying an Red Hat Decision Manager project .
[ "c : Customer() Applicant( this == c )", "'Applicant.creditRating' : ['AA', 'OK', 'Sub prime']", "'Loan.status' : ['A=Approved','D=Declined']", "'Insurance.policyType' : ['Home', 'Car'] 'Insurance.coverage[policyType=Home]' : ['property', 'liability'] 'Insurance.coverage[policyType=Car]' : ['collision', 'fullCoverage']", "'Applicant.region' : (new com.mycompany.DataHelper()).getListOfRegions()", "'Applicant.region[countryCode]' : '(new com.mycompany.DataHelper()).getListOfRegions(\"@{countryCode}\")'", "public class SampleDataSource { public Map<String, List<String>> loadData() { Map data = new HashMap(); List d = new ArrayList(); d.add(\"AU\"); d.add(\"DE\"); d.add(\"ES\"); d.add(\"UK\"); d.add(\"US\"); data.put(\"Applicant.region\", d); return data; } }", "=(new SampleDataSource()).loadData()" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-rule-templates-create-proc
Chapter 2. Enabling and using the Red Hat Quay API
Chapter 2. Enabling and using the Red Hat Quay API By leveraging the Red Hat Quay API, you can streamline container registry management, automate tasks, and integrate Red Hat Quay's functionalities into your existing workflow. This can improve efficiency, offer enhanced flexibility (by way of repository management, user management, user permissions, image management, and so on), increase the stability of your organization, repository, or overall deployment, and more. Detailed instructions for how to use the Red Hat Quay API can be found in the Red Hat Quay API guide . In that guide, the following topics are covered: Red Hat Quay token types, including OAuth 2 access tokens, robot account tokens, and OCI referrers tokens, and how to generate these tokens. Enabling the Red Hat Quay API by configuring your config.yaml file. How to use the Red Hat Quay API by passing in your OAuth 2 account token into the desired endpoint. API examples, including one generic example of how an administrator might automate certain tasks. See the Red Hat Quay API guide before attempting to use the API endpoints offered in this chapter.
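As a hedged illustration of the general call pattern only (the authoritative endpoint list and token workflow are in the Red Hat Quay API guide), an OAuth 2 access token is passed as a bearer token on each request; the registry hostname, the token value, and the /api/v1/user/ endpoint shown here are placeholders and assumptions to check against the API guide for your release.
# Query the API as the user that owns the OAuth 2 access token
curl -s -X GET \
  -H "Authorization: Bearer <oauth2-access-token>" \
  -H "Content-Type: application/json" \
  https://<quay-server.example.com>/api/v1/user/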
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/use_red_hat_quay/enabling-using-the-api
Chapter 8. Fixed issues
Chapter 8. Fixed issues The issues fixed in Streams for Apache Kafka 2.9 on RHEL. For details of the issues fixed in Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes. Table 8.1. Streams for Apache Kafka fixed issues Issue Number Description ENTMQST-4324 Make it possible to use Cruise Control to move all data between two JBOD disks ENTMQST-5318 [KAFKA] Improve MirrorMaker logging in case of authorization errors ENTMQST-6234 [BRIDGE] path label in metrics can contain very different values and that makes it hard to work with the metrics 8.1. Security updates Check the latest information about Streams for Apache Kafka security updates in the Red Hat Product Advisories portal. 8.2. Errata Check the latest security and product enhancement advisories for Streams for Apache Kafka.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/resolved-issues-str
Deploying installer-provisioned clusters on bare metal
Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.14 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/index
Chapter 115. SEDA
Chapter 115. SEDA Both producer and consumer are supported The SEDA component provides asynchronous SEDA behavior, so that messages are exchanged on a BlockingQueue and consumers are invoked in a separate thread from the producer. Note that queues are only visible within a single CamelContext. If you want to communicate across CamelContext instances (for example, communicating between Web applications), see the component. This component does not implement any kind of persistence or recovery, if the VM terminates while messages are yet to be processed. If you need persistence, reliability or distributed SEDA, try using either JMS or ActiveMQ. Note Synchronous The Direct component provides synchronous invocation of any consumers when a producer sends a message exchange. 115.1. Dependencies When using seda with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-seda-starter</artifactId> </dependency> 115.2. URI format Where someName can be any string that uniquely identifies the endpoint within the current CamelContext. 115.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 115.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 115.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 115.4. Component Options The SEDA component supports 10 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Sets the default number of concurrent threads processing exchanges. 1 int defaultPollTimeout (consumer (advanced)) The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 
1000 int defaultBlockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean defaultDiscardWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false boolean defaultOfferTimeout (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue. long lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultQueueFactory (advanced) Sets the default queue factory. BlockingQueueFactory queueSize (advanced) Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 int 115.5. Endpoint Options The SEDA endpoint is configured using URI syntax: with the following path and query parameters: 115.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of queue. String 115.5.2. Query Parameters (18 parameters) Name Description Default Type size (common) The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component. 1000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Number of concurrent threads processing exchanges. 1 int exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern limitConcurrentConsumers (consumer (advanced)) Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off. true boolean multipleConsumers (consumer (advanced)) Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. false boolean pollTimeout (consumer (advanced)) The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int purgeWhenStopping (consumer (advanced)) Whether to purge the task queue when stopping the consumer/route. This allows to stop faster, as any pending messages on the queue is discarded. false boolean blockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean discardIfNoConsumers (producer) Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean discardWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean offerTimeout (producer) Offer timeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value. long timeout (producer) Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value. 
30000 long waitForTaskToComplete (producer) Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected. Enum values: Never IfReplyExpected Always IfReplyExpected WaitForTaskToComplete queue (advanced) Define the queue instance which will be used by the endpoint. BlockingQueue 115.6. Choosing BlockingQueue implementation By default, the SEDA component always instantiates a LinkedBlockingQueue, but you can use a different implementation: you can reference your own BlockingQueue implementation, in which case the size option is not used <bean id="arrayQueue" class="java.util.ArrayBlockingQueue"> <constructor-arg index="0" value="10" ><!-- size --> <constructor-arg index="1" value="true" ><!-- fairness --> </bean> <!-- ... and later --> <from>seda:array?queue=#arrayQueue</from> Alternatively, you can reference a BlockingQueueFactory implementation; three implementations are provided: LinkedBlockingQueueFactory, ArrayBlockingQueueFactory, and PriorityBlockingQueueFactory: <bean id="priorityQueueFactory" class="org.apache.camel.component.seda.PriorityBlockingQueueFactory"> <property name="comparator"> <bean class="org.apache.camel.demo.MyExchangeComparator" /> </property> </bean> <!-- ... and later --> <from>seda:priority?queueFactory=#priorityQueueFactory&size=100</from> 115.7. Use of Request Reply The SEDA component supports using Request Reply, where the caller will wait for the Async route to complete. For instance: from("mina:tcp://0.0.0.0:9876?textline=true&sync=true").to("seda:input"); from("seda:input").to("bean:processInput").to("bean:createResponse"); In the route above, we have a TCP listener on port 9876 that accepts incoming requests. The request is routed to the seda:input queue. As it is a Request Reply message, we wait for the response. When the consumer on the seda:input queue is complete, it copies the response to the original message response. 115.8. Concurrent consumers By default, the SEDA endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads. So instead of thread pools you can use: from("seda:stageName?concurrentConsumers=5").process(...) As for the difference between the two, note that a thread pool can increase/shrink dynamically at runtime depending on load, whereas the number of concurrent consumers is always fixed. 115.9. Thread pools Be aware that adding a thread pool to a SEDA endpoint by doing something like: from("seda:stageName").thread(5).process(...) can wind up with two BlockingQueues : one from the SEDA endpoint, and one from the work queue of the thread pool, which may not be what you want. Instead, you might wish to configure a Direct endpoint with a thread pool, which can process messages both synchronously and asynchronously. For example: from("direct:stageName").thread(5).process(...) You can also directly configure the number of threads that process messages on a SEDA endpoint using the concurrentConsumers option. 115.10. Sample In the route below, we use a SEDA queue to hand the request over to an asynchronous queue, so that we can send a fire-and-forget message for further processing in another thread and return a constant reply in this thread to the original caller. We send a Hello World message and expect the reply to be OK. 
@Test public void testSendAsync() throws Exception { MockEndpoint mock = getMockEndpoint("mock:result"); mock.expectedBodiesReceived("Hello World"); // START SNIPPET: e2 Object out = template.requestBody("direct:start", "Hello World"); assertEquals("OK", out); // END SNIPPET: e2 assertMockEndpointsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { // START SNIPPET: e1 public void configure() throws Exception { from("direct:start") // send it to the seda queue that is async .to("seda:next") // return a constant response .transform(constant("OK")); from("seda:next").to("mock:result"); } // END SNIPPET: e1 }; } The "Hello World" message will be consumed from the SEDA queue from another thread for further processing. Since this is from a unit test, it will be sent to a mock endpoint where we can do assertions in the unit test. 115.11. Using multipleConsumers In this example, we have defined two consumers. @Test public void testSameOptionsProducerStillOkay() throws Exception { getMockEndpoint("mock:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:bar").expectedBodiesReceived("Hello World"); template.sendBody("seda:foo", "Hello World"); assertMockEndpointsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("seda:foo?multipleConsumers=true").routeId("foo").to("mock:foo"); from("seda:foo?multipleConsumers=true").routeId("bar").to("mock:bar"); } }; } Since we have specified multipleConsumers=true on the seda foo endpoint, we can have those two consumers receive their own copy of the message as a kind of pub-sub style messaging. As these routes are part of a unit test, they simply send the message to a mock endpoint. 115.12. Extracting queue information If needed, information such as queue size, etc. can be obtained without using JMX in this fashion: SedaEndpoint seda = context.getEndpoint("seda:xxxx"); int size = seda.getExchanges().size(); 115.13. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.seda.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.seda.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.seda.concurrent-consumers Sets the default number of concurrent threads processing exchanges. 1 Integer camel.component.seda.default-block-when-full Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. 
false Boolean camel.component.seda.default-discard-when-full Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false Boolean camel.component.seda.default-offer-timeout Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue. Long camel.component.seda.default-poll-timeout The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 Integer camel.component.seda.default-queue-factory Sets the default queue factory. The option is a org.apache.camel.component.seda.BlockingQueueFactory<org.apache.camel.Exchange> type. BlockingQueueFactory camel.component.seda.enabled Whether to enable auto configuration of the seda component. This is enabled by default. Boolean camel.component.seda.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.seda.queue-size Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 Integer
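The producer- and consumer-side options listed in the tables above can be combined on a single SEDA queue. The following Java route is a minimal sketch, not taken from the Red Hat documentation: the queue name orders, the endpoint direct:orders, and the chosen timeout values are illustrative assumptions; only the option names (blockWhenFull, offerTimeout, timeout, concurrentConsumers) come from the documented tables.

import org.apache.camel.builder.RouteBuilder;

public class SedaOptionsRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Producer side: if the "orders" queue is full, block for up to 5 seconds
        // (blockWhenFull + offerTimeout) instead of throwing immediately, and wait
        // at most 20 seconds for the asynchronous task when a reply is expected.
        from("direct:orders")
            .to("seda:orders?blockWhenFull=true&offerTimeout=5000&timeout=20000");

        // Consumer side: three concurrent threads drain the same queue.
        from("seda:orders?concurrentConsumers=3")
            .to("log:orders");
    }
}

With camel-seda-starter on the classpath, registering a RouteBuilder like this as a Spring bean (for example, annotating it with @Component) is typically enough for Camel Spring Boot to pick it up automatically.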
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-seda-starter</artifactId> </dependency>", "seda:someName[?options]", "seda:name", "<bean id=\"arrayQueue\" class=\"java.util.ArrayBlockingQueue\"> <constructor-arg index=\"0\" value=\"10\" ><!-- size --> <constructor-arg index=\"1\" value=\"true\" ><!-- fairness --> </bean> <!-- ... and later --> <from>seda:array?queue=#arrayQueue</from>", "<bean id=\"priorityQueueFactory\" class=\"org.apache.camel.component.seda.PriorityBlockingQueueFactory\"> <property name=\"comparator\"> <bean class=\"org.apache.camel.demo.MyExchangeComparator\" /> </property> </bean> <!-- ... and later --> <from>seda:priority?queueFactory=#priorityQueueFactory&size=100</from>", "from(\"mina:tcp://0.0.0.0:9876?textline=true&sync=true\").to(\"seda:input\"); from(\"seda:input\").to(\"bean:processInput\").to(\"bean:createResponse\");", "from(\"seda:stageName?concurrentConsumers=5\").process(...)", "from(\"seda:stageName\").thread(5).process(...)", "from(\"direct:stageName\").thread(5).process(...)", "@Test public void testSendAsync() throws Exception { MockEndpoint mock = getMockEndpoint(\"mock:result\"); mock.expectedBodiesReceived(\"Hello World\"); // START SNIPPET: e2 Object out = template.requestBody(\"direct:start\", \"Hello World\"); assertEquals(\"OK\", out); // END SNIPPET: e2 assertMockEndpointsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { // START SNIPPET: e1 public void configure() throws Exception { from(\"direct:start\") // send it to the seda queue that is async .to(\"seda:next\") // return a constant response .transform(constant(\"OK\")); from(\"seda:next\").to(\"mock:result\"); } // END SNIPPET: e1 }; }", "@Test public void testSameOptionsProducerStillOkay() throws Exception { getMockEndpoint(\"mock:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:bar\").expectedBodiesReceived(\"Hello World\"); template.sendBody(\"seda:foo\", \"Hello World\"); assertMockEndpointsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"seda:foo?multipleConsumers=true\").routeId(\"foo\").to(\"mock:foo\"); from(\"seda:foo?multipleConsumers=true\").routeId(\"bar\").to(\"mock:bar\"); } }; }", "SedaEndpoint seda = context.getEndpoint(\"seda:xxxx\"); int size = seda.getExchanges().size();" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-seda-component-starter
2.2.2.2. Command Changes
2.2.2.2. Command Changes This section lists the most important changes to commands and their options: The network --device option can now refer to devices by MAC addresses instead of device name. Similar to disks, network device names can also change across reboots depending on the order in which devices are probed. In order to allow consistent naming in Kickstart, you could use an entry similar to the following: The langsupport , key and mouse commands have been removed. Any use of these commands will result in a syntax error. The monitor command has also been deprecated. Instead of langsupport , add the appropriate group to the %packages section of your Kickstart file. For example, to include French support, add @french-support . There is no replacement for the key option, as an installation key is no longer requested during install. Simply remove this option from your file. The mouse and monitor commands are not required as X.Org can detect and configure settings automatically. For the same reason, the xconfig --resolution= command is no longer valid, and these can all be safely removed from the file. The part --start and part --end commands have been deprecated and have no effect. Anaconda no longer allows creating partitions at specific sector boundaries. If you require a more strict level of partitioning, use an external tool in %pre and then tell Anaconda to use existing partitions with the part --onpart command. Otherwise, create partitions with a certain size or use --grow . Instead of creating groups manually in %post , you can now use the group command to create them for you. See the complete Kickstart documentation for more details. The rescue command automatically enters the installer's rescue mode for recovery and repair. You can optionally use the --nomount (to not mount any file systems) or the --romount (mount in read-only mode) options to the rescue command. The sshpw command has been added. It is used to control the accounts created in the installation environment that are remotely logged into while installation is taking place. The updates command has been added, allowing you to specify the location of any updates.img file to be used during installation. The fcoe command will enable the installer to activate any FCoE locations attached to the specified network interface. The default autopart algorithm has changed. For all machines, autopart will create a /boot (or other special boot loader partitions as required by the architecture) and swap. For machines with at least 50 GB of free disk space, autopart will create a reasonably sized root partition ( / ) and the rest will be assigned to /home . For those machines with less space, only root ( / ) will be created. If you do not want a /home volume created for you, do not use autopart. Instead, specify /boot , swap and / , making sure to allow the root volume to grow as necessary. Anaconda now includes a new storage filtering interface to control which devices are visible during installation. This interface corresponds to the existing ignoredisk , clearpart and zerombr commands. Because ignoredisk is optional, excluding it from the Kickstart file will not cause the filter UI to appear during installation. If you wish to use this interface, add: The --size=1 --grow option from the /tmp/partition-include file can no longer be used. You must specify a reasonable default size and partitions will grow accordingly.
[ "network --device=00:11:22:33:44:55 --bootproto=dhcp", "ignoredisk --interactive" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-installation-graphical_installer-kickstart-command_changes
Chapter 4. Using SSL to protect connections to Red Hat Quay
Chapter 4. Using SSL to protect connections to Red Hat Quay 4.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate , you must create a Certificate Authority (CA) and then generate the required key and certificate files. Note The following examples assume you have configured the server hostname quay-server.example.com using DNS or another naming mechanism, such as adding an entry in your /etc/hosts file: USD cat /etc/hosts ... 192.168.1.112 quay-server.example.com 4.2. Creating a certificate authority and signing a certificate Use the following procedures to create a certificate file and a private key file named ssl.cert and ssl.key . 4.2.1. Creating a certificate authority Use the following procedure to create a certificate authority (CA). Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com 4.2.2. Signing a certificate Use the following procedure to sign a certificate. Procedure Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Create a configuration file openssl.cnf , specifying the server hostname, for example: openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = quay-server.example.com IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 4.3. Configuring SSL using the command line interface Use the following procedure to configure SSL/TLS using the command line interface. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and private key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key USDQUAY/config Change into the USDQUAY/config directory by entering the following command: USD cd USDQUAY/config Edit the config.yaml file and specify that you want Red Hat Quay to handle TLS/SSL: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... 
Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop quay Restart the registry by entering the following command: 4.4. Configuring SSL/TLS using the Red Hat Quay UI Use the following procedure to configure SSL/TLS using the Red Hat Quay UI. To configure SSL using the command line interface, see "Configuring SSL/TLS using the command line interface". Prerequisites You have created a certificate authority and signed the certificate. Procedure Start the Quay container in configuration mode: In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created. Validate and download the updated configuration. Stop the Quay container and then restart the registry by entering the following command: 4.5. Testing SSL configuration using the command line Use the podman login command to attempt to log in to the Quay registry with SSL enabled: Podman does not trust self-signed certificates. As a workaround, use the --tls-verify=false option: Configuring Podman to trust the root Certificate Authority (CA) is covered in a subsequent section. 4.6. Testing SSL configuration using the browser When you attempt to access the Quay registry, in this case, https://quay-server.example.com , the browser warns of the potential risk: Proceed to the login screen, and the browser will notify you that the connection is not secure: Configuring the system to trust the root Certificate Authority (CA) is covered in the subsequent section. 4.7. Configuring podman to trust the Certificate Authority Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/ and /etc/docker/certs.d/ . Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and name the file ca.crt : Alternatively, if you are using Docker, you can copy the root CA file to the equivalent Docker directory: You should no longer need to use the --tls-verify=false option when logging in to the registry: 4.8. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates .
[ "cat /etc/hosts 192.168.1.112 quay-server.example.com", "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = quay-server.example.com IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "cp ~/ssl.cert ~/ssl.key USDQUAY/config", "cd USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop quay", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.9.10", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.9.10 config secret", "sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.9.10", "sudo podman login quay-server.example.com Username: quayadmin Password: Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority", "sudo podman login --tls-verify=false quay-server.example.com Username: quayadmin Password: Login Succeeded!", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo cp rootCA.pem /etc/docker/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com Username: quayadmin Password: Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/using-ssl-to-protect-quay
Appendix A. Automation execution environments precedence
Appendix A. Automation execution environments precedence Project updates will always use the control plane automation execution environments by default; however, jobs will use the first available automation execution environment, as follows: The execution_environment defined on the template (job template or inventory source) that created the job. The default_environment defined on the project that the job uses. The default_environment defined on the organization of the job. The default_environment defined on the organization of the inventory the job uses. The current DEFAULT_EXECUTION_ENVIRONMENT setting (configurable at api/v2/settings/system/ ) Any image from the GLOBAL_JOB_EXECUTION_ENVIRONMENTS setting. Any other global execution environment. Note If more than one execution environment fits the criteria (this applies to items 6 and 7), the most recently created one is used.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/creating_and_consuming_execution_environments/con-ee-precedence
Chapter 14. Upgrading the Red Hat Quay Operator Overview
Chapter 14. Upgrading the Red Hat Quay Operator Overview The Red Hat Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Red Hat Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Red Hat Quay to deploy ; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Red Hat Quay on Kubernetes. 14.1. Operator Lifecycle Manager The Red Hat Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM) . When creating a Subscription with the default approvalStrategy: Automatic , OLM will automatically upgrade the Red Hat Quay Operator whenever a new version becomes available. Warning When the Red Hat Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the OperatorHub page for the Red Hat Quay Operator during installation. It can also be found in the Red Hat Quay Operator Subscription object by the approvalStrategy field. Choosing Automatic means that your Red Hat Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected. 14.2. Upgrading the Red Hat Quay Operator The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators . In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 → 3.1.3, 3.1.3 → 3.2.2, 3.2.2 → 3.3.4, 3.3.4 → 3.4.z, 3.4.z → 3.5.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3.13: 3.11.z → 3.13.z, 3.12.z → 3.13.z For users on standalone deployments of Red Hat Quay wanting to upgrade to 3.13, see the Standalone upgrade guide. 14.2.1. Upgrading Red Hat Quay to version 3.13 To update Red Hat Quay from one minor version to the next, for example, 3.12.z → 3.13, you must change the update channel for the Red Hat Quay Operator. Procedure In the OpenShift Container Platform Web Console, navigate to Operators → Installed Operators . Click on the Red Hat Quay Operator. Navigate to the Subscription tab. Under Subscription details click Update channel . Select stable-3.13 → Save . Check the progress of the new installation under Upgrade status . Wait until the upgrade status changes to 1 installed before proceeding. In your OpenShift Container Platform cluster, navigate to Workloads → Pods . Existing pods should be terminated, or in the process of being terminated. Wait for the following pods, which are responsible for upgrading the database and alembic migration of existing data, to spin up: clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade . 
After the clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade pods are marked as Completed , the remaining pods for your Red Hat Quay deployment spin up. This takes approximately ten minutes. Verify that the quay-database uses the postgresql-13 image, and the clair-postgres pods now use the postgresql-15 image. After the quay-app pod is marked as Running , you can reach your Red Hat Quay registry. 14.2.2. Upgrading to the next minor release version For z stream upgrades, for example, 3.12.1 → 3.12.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic , the Red Hat Quay Operator upgrades automatically to the newest z stream. This results in automatic, rolling Red Hat Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin. 14.2.3. Upgrading from Red Hat Quay 3.12 to 3.13 With Red Hat Quay 3.13, the volumeSize parameter has been implemented for use with the clairpostgres component of the QuayRegistry custom resource definition (CRD). This replaces the volumeSize parameter that was previously used for the clair component of the same CRD. If your Red Hat Quay 3.12 QuayRegistry custom resource definition (CRD) implemented a volume override for the clair component, you must ensure that the volumeSize field is included under the clairpostgres component of the QuayRegistry CRD. Important Failure to move volumeSize from the clair component to the clairpostgres component will result in a failed upgrade to version 3.13. For example: spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size> 14.2.4. Changing the update channel for the Red Hat Quay Operator The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Red Hat Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Red Hat Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators. 14.2.5. Manually approving a pending Operator upgrade If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Red Hat Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Red Hat Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade. The following image shows the Subscription tab in the UI, including the update Channel , the Approval strategy, the Upgrade status and the InstallPlan : The list of Installed Operators provides a high-level summary of the current Quay installation: 14.3. Upgrading a QuayRegistry resource When the Red Hat Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. 
When it finds one, the following logic is used: If status.currentVersion is unset, reconcile as normal. If status.currentVersion equals the Operator version, reconcile as normal. If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone. 14.4. Upgrading a QuayEcosystem Upgrades are supported from versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry , use the following procedure. Procedure Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem . USD oc edit quayecosystem <quayecosystemname> metadata: labels: quay-operator/migrate: "true" Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem . The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true" . After the status.registryEndpoint of the new QuayRegistry is set, access Red Hat Quay and confirm that all data and settings were migrated successfully. If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources. 14.4.1. Reverting QuayEcosystem Upgrade If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry , follow these steps to revert back to using the QuayEcosystem : Procedure Delete the QuayRegistry using either the UI or kubectl : USD kubectl delete -n <namespace> quayregistry <quayecosystem-name> If external access was provided using a Route , change the Route to point back to the original Service using the UI or kubectl . Note If your QuayEcosystem was managing the PostgreSQL database, the upgrade process will migrate your data to a new PostgreSQL database managed by the upgraded Operator. Your old database will not be changed or removed but Red Hat Quay will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component. 14.4.2. Supported QuayEcosystem Configurations for Upgrades The Red Hat Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Red Hat Quay's config.yaml file. Database Ephemeral database not supported ( volumeSize field must be set). Redis Nothing special needed. External Access Only passthrough Route access is supported for automatic migration. Manual migration required for other methods. 
LoadBalancer without custom hostname: After the QuayEcosystem is marked with label "quay-operator/migration-complete": "true" , delete the metadata.ownerReferences field from existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app . Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service ; the Quay Operator will not manage it. LoadBalancer / NodePort / Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app . Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service . Clair Nothing special needed. Object Storage QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported. Repository Mirroring Nothing special needed. Additional resources For more details on the Red Hat Quay Operator, see the upstream quay-operator project.
[ "spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size>", "oc edit quayecosystem <quayecosystemname>", "metadata: labels: quay-operator/migrate: \"true\"", "kubectl delete -n <namespace> quayregistry <quayecosystem-name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_operator_features/operator-upgrade
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_rest_api/making-open-source-more-inclusive_datagrid
Providing feedback on Red Hat Ceph Storage documentation
Providing feedback on Red Hat Ceph Storage documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so, create a Bugzilla ticket: Go to the Bugzilla website. In the Component drop-down, select Documentation . In the Sub-Component drop-down, select the appropriate sub-component. Select the appropriate version of the document. Fill in the Summary and Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Optional: Add an attachment, if any. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/8.0_release_notes/providing-feedback-on-red-hat-ceph-storage-documentation
Chapter 3. Security and SAP Solutions
Chapter 3. Security and SAP Solutions Enterprises usually have substantial compliance requirements based on the industry, type of customers, geographic location, and more. Such requirements may need specific certifications, cryptographic modules, and support for encryption. With Red Hat Enterprise Linux for SAP Solutions, Red Hat delivers a stable, security-focused, high-performance foundation for SAP business applications to support such requirements and provide an easy way to set and validate compliance policies. You can learn about processes and practices for securing Red Hat Enterprise Linux systems against local and remote intrusion, exploitation, and malicious activity. These approaches and tools can create a more secure environment for running SAP HANA. Additional resources For more information, see Security hardening guide for SAP HANA . 3.1. SELinux for SAP production environments SELinux is a security technology for process isolation to mitigate attacks via privilege escalation. Configuring SELinux helps you enhance your system's security. SELinux is an implementation of Mandatory Access Control (MAC), and provides an additional layer of security. The SELinux policy defines how users and processes can interact with the files on the system. You can control which users can perform which actions by mapping them to specific SELinux confined users. Additional resources For more information, see Using SELinux for SAP HANA . 3.2. File access policy Daemon for SAP File access policy Daemon fapolicyd is a technology provided in RHEL to determine access rights to files based on a trust database and file or process attributes. It helps customers to ensure data remains protected even if an attacker has successfully gained control over certain processes. You can configure fapolicyd to secure the environment for running SAP HANA against local and remote intrusion, exploitation, and malicious activity. Additional resources For more information, see Configuring fapolicyd to allow only SAP HANA executables .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/overview_of_red_hat_enterprise_linux_for_sap_solutions_subscription/assembly_security-and-sap-solutions_overview-of-rhel-for-sap-solutions-subscription-combined-9