Dataset Viewer (auto-converted to Parquet)
Columns: title (string, 4 to 168 chars), content (string, 7 to 1.74M chars), commands (sequence of strings, 1 to 5.62k items, nullable), url (string, 79 to 342 chars)
Chapter 41. PodDisruptionBudgetTemplate schema reference
Chapter 41. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties A PodDisruptionBudget (PDB) is an OpenShift resource that ensures high availability by specifying the minimum number of pods that must be available during planned maintenance or upgrades. AMQ Streams creates a PDB for every new StrimziPodSet or Deployment . By default, the PDB allows only one pod to be unavailable at any given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property. StrimziPodSet custom resources manage pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is automatically converted to a minAvailable value when creating the PDB resource, which effectively serves the same purpose, as illustrated in the following examples: If there are three broker pods and the maxUnavailable property is set to 1 in the Kafka resource, the minAvailable setting is 2 , allowing one pod to be unavailable. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Example PodDisruptionBudget template configuration # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 41.1. PodDisruptionBudgetTemplate schema properties Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate maxUnavailable Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. integer
[ "template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-poddisruptionbudgettemplate-reference
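A quick way to see the maxUnavailable-to-minAvailable conversion described above is to inspect the PDB that the operator generates. The following sketch assumes a Kafka cluster named my-cluster in a namespace named kafka; both names are placeholders, not values from this reference.
# List the generated PodDisruptionBudgets (resource names are assumptions based on the cluster name)
oc get poddisruptionbudget -n kafka
# With three brokers and maxUnavailable: 1, the generated spec should report minAvailable: 2
oc get poddisruptionbudget my-cluster-kafka -n kafka -o jsonpath='{.spec.minAvailable}'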
Chapter 6. Getting Started with OptaPlanner and Quarkus
Chapter 6. Getting Started with OptaPlanner and Quarkus You can use the https://code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. You can then download the Quarkus Maven repository or use the online Maven repository with your project. 6.1. Apache Maven and Red Hat build of Quarkus Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. Maven repositories A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the online Maven repository with your Quarkus projects or you can download the Red Hat build of Quarkus Maven repository. Maven plug-ins Maven plug-ins are defined parts of a POM file that achieve one or more goals. Quarkus applications use the following Maven plug-ins: Quarkus Maven plug-in ( quarkus-maven-plugin ): Enables Maven to create Quarkus projects, supports the generation of uber-JAR files, and provides a development mode. Maven Surefire plug-in ( maven-surefire-plugin ): Used during the test phase of the build lifecycle to execute unit tests on your application. The plug-in generates text and XML files that contain the test reports. 6.1.1. Configuring the Maven settings.xml file for the online repository You can use the online Maven repository with your Maven project by configuring your user settings.xml file. This is the recommended approach. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Open the Maven ~/.m2/settings.xml file in a text editor or integrated development environment (IDE). Note If there is not a settings.xml file in the ~/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/.m2/conf/ directory into the ~/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> 6.1.2. 
Downloading and configuring the Quarkus Maven repository If you do not want to use the online Maven repository, you can download and configure the Quarkus Maven repository to create a Quarkus application with Maven. The Quarkus Maven repository contains many of the requirements that Java developers typically use to build their applications. This procedure describes how to edit the settings.xml file to configure the Quarkus Maven repository. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Download the Red Hat build of Quarkus Maven repository ZIP file from the Software Downloads page of the Red Hat Customer Portal (login required). Expand the downloaded archive. Change directory to the ~/.m2/ directory and open the Maven settings.xml file in a text editor or integrated development environment (IDE). Add the following lines to the <profiles> element of the settings.xml file, where QUARKUS_MAVEN_REPOSITORY is the path of the Quarkus Maven repository that you downloaded. The format of QUARKUS_MAVEN_REPOSITORY must be file://USDPATH , for example file:///home/userX/rh-quarkus-2.13.GA-maven-repository/maven-repository . <!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project, where ARTIFACT_NAME is the name of a missing artifact and PROJECT_NAME is the name of the project you are trying to build: Missing artifact PROJECT_NAME [ERROR] Failed to execute goal on project ARTIFACT_NAME ; Could not resolve dependencies for PROJECT_NAME To resolve the issue, delete the cached version of your local repository located in the ~/.m2/repository directory to force a download of the latest Maven artifacts. 6.2. Creating an OptaPlanner Red Hat build of Quarkus Maven project using the Maven plug-in You can get up and running with a Red Hat build of OptaPlanner and Quarkus application using Apache Maven and the Quarkus Maven plug-in. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. Procedure In a command terminal, enter the following command to verify that Maven is using JDK 11 and that the Maven version is 3.6 or higher: If the preceding command does not return JDK 11, add the path to JDK 11 to the PATH environment variable and enter the preceding command again. 
To generate a Quarkus OptaPlanner quickstart project, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create \ -DprojectGroupId=com.example \ -DprojectArtifactId=optaplanner-quickstart \ -Dextensions="resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson" \ -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 \ -DnoExamples This command create the following elements in the ./optaplanner-quickstart directory: The Maven structure Example Dockerfile file in src/main/docker The application configuration file Table 6.1. Properties used in the mvn io.quarkus:quarkus-maven-plugin:2.13.Final-redhat-00006:create command Property Description projectGroupId The group ID of the project. projectArtifactId The artifact ID of the project. extensions A comma-separated list of Quarkus extensions to use with this project. For a full list of Quarkus extensions, enter mvn quarkus:list-extensions on the command line. noExamples Creates a project with the project structure but without tests or classes. The values of the projectGroupID and the projectArtifactID properties are used to generate the project version. The default project version is 1.0.0-SNAPSHOT . To view your OptaPlanner project, change directory to the OptaPlanner Quickstarts directory: Review the pom.xml file. The content should be similar to the following example: 6.3. Creating a Red Hat build of Quarkus Maven project using code.quarkus.redhat.com You can use the code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. In addition, code.quarkus.redhat.com automatically manages the configuration parameters required to compile your project into a native executable. This section walks you through the process of generating an OptaPlanner Maven project and includes the following topics: Specifying basic details about your application. Choosing the extensions that you want to include in your project. Generating a downloadable archive with your project files. Using the custom commands for compiling and starting your application. Prerequisites You have a web browser. Procedure Open https://code.quarkus.redhat.com in your web browser: Specify details about your project: Enter a group name for your project. The format of the name follows the Java package naming convention, for example, com.example . Enter a name that you want to use for Maven artifacts generated from your project, for example code-with-quarkus . Select Build Tool > Maven to specify that you want to create a Maven project. The build tool that you choose determines the items: The directory structure of your generated project The format of configuration files used in your generated project The custom build script and command for compiling and starting your application that code.quarkus.redhat.com displays for you after you generate your project Note Red Hat provides support for using code.quarkus.redhat.com to create OptaPlanner Maven projects only. Generating Gradle projects is not supported by Red Hat. Enter a version to be used in artifacts generated from your project. The default value of this field is 1.0.0-SNAPSHOT . Using semantic versioning is recommended, but you can use a different type of versioning if you prefer. Enter the package name of artifacts that the build tool generates when you package your project. 
According to the Java package naming conventions the package name should match the group name that you use for your project, but you can specify a different name. Note The code.quarkus.redhat.com website automatically uses the latest release of OptaPlanner. You can manually change the BOM version in the pom.xml file after you generate your project. Select the following extensions to include as dependencies: RESTEasy JAX-RS (quarkus-resteasy) RESTEasy Jackson (quarkus-resteasy-jackson) OptaPlanner AI constraint solver(optaplanner-quarkus) OptaPlanner Jackson (optaplanner-quarkus-jackson) Red Hat provides different levels of support for individual extensions on the list, which are indicated by labels to the name of each extension: SUPPORTED extensions are fully supported by Red Hat for use in enterprise applications in production environments. TECH-PREVIEW extensions are subject to limited support by Red Hat in production environments under the Technology Preview Features Support Scope . DEV-SUPPORT extensions are not supported by Red Hat for use in production environments, but the core functionalities that they provide are supported by Red Hat developers for use in developing new applications. DEPRECATED extension are planned to be replaced with a newer technology or implementation that provides the same functionality. Unlabeled extensions are not supported by Red Hat for use in production environments. Select Generate your application to confirm your choices and display the overlay screen with the download link for the archive that contains your generated project. The overlay screen also shows the custom command that you can use to compile and start your application. Select Download the ZIP to save the archive with the generated project files to your system. Extract the contents of the archive. Navigate to the directory that contains your extracted project files: cd <directory_name> Compile and start your application in development mode: ./mvnw compile quarkus:dev 6.4. Creating a Red Hat build of Quarkus Maven project using the Quarkus CLI You can use the Quarkus command line interface (CLI) to create a Quarkus OptaPlanner project. Prerequisites You have installed the Quarkus CLI. For information, see Building Quarkus Apps with Quarkus Command Line Interface . Procedure Create a Quarkus application: To view the available extensions, enter the following command: This command returns the following extensions: Enter the following command to add extensions to the project's pom.xml file: Open the pom.xml file in a text editor. The contents of the file should look similar to the following example:
[ "<!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<activeProfile>red-hat-enterprise-maven-repository</activeProfile>", "<!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<activeProfile>red-hat-enterprise-maven-repository</activeProfile>", "mvn --version", "mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create -DprojectGroupId=com.example -DprojectArtifactId=optaplanner-quickstart -Dextensions=\"resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson\" -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 -DnoExamples", "cd optaplanner-quickstart", "<dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-optaplanner-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> </dependencies>", "cd <directory_name>", "./mvnw compile quarkus:dev", "quarkus create app -P io.quarkus:quarkus-bom:2.13.Final-redhat-00006", "quarkus ext -i", "optaplanner-quarkus optaplanner-quarkus-benchmark optaplanner-quarkus-jackson optaplanner-quarkus-jsonb", "quarkus ext add resteasy-jackson quarkus ext add optaplanner-quarkus quarkus ext add optaplanner-quarkus-jackson", "<?xml version=\"1.0\"?> <project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <modelVersion>4.0.0</modelVersion> <groupId>org.acme</groupId> <artifactId>code-with-quarkus-optaplanner</artifactId> 
<version>1.0.0-SNAPSHOT</version> <properties> <compiler-plugin.version>3.8.1</compiler-plugin.version> <maven.compiler.parameters>true</maven.compiler.parameters> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id> <quarkus.platform.version>2.13.Final-redhat-00006</quarkus.platform.version> <surefire-plugin.version>3.0.0-M5</surefire-plugin.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>optaplanner-quarkus</artifactId> <version>2.2.2.Final</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-arc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <parameters>USD{maven.compiler.parameters}</parameters> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <build> <plugins> <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> 
</plugin> </plugins> </build> <properties> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/optaplanner-quarkus-con_getting-started-optaplanner
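To confirm that the red-hat-enterprise-maven-repository profile added to settings.xml above is actually picked up, Maven's help plugin can list active profiles; this is a sketch that assumes a standard Maven installation and is run from the project directory.
# Show which profiles are active for this build (the repository profile should appear)
mvn help:active-profiles
# Re-check the JDK and Maven versions required by the procedure
mvn --version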
Appendix A. Using LDAP Client Tools
Appendix A. Using LDAP Client Tools Red Hat Directory Server uses the LDAP tools (such as ldapsearch and ldapmodify ) supplied with OpenLDAP. The OpenLDAP tool options are described in the OpenLDAP man pages at http://www.openldap.org/software/man.cgi . This appendix gives some common usage scenarios and examples for using these LDAP tools. More extensive examples for using ldapsearch are given in Chapter 14, Finding Directory Entries . More examples for using ldapmodify and ldapdelete are given in Chapter 3, Managing Directory Entries . A.1. Running Extended Operations Red Hat Directory Server supports a variety of extended operations, especially extended search operations. An extended operation passes an additional operation (such as a get effective rights search or server-side sort) along with the LDAP operation. Likewise, LDAP clients have the potential to support a number of extended operations. The OpenLDAP LDAP tools support extended operations in two ways. All client tools ( ldapmodify , ldapsearch , and the others) use either the -e or -E options to send an extended operation. The -e argument can be used with any OpenLDAP client tool and sends general instructions about the operation, like how to handle password policies. The -E option is used only with ldapsearch and passes more useful controls like GER searches, sort and page information, and information for other, not-explicitly-supported extended operations. Additionally, OpenLDAP has another tool, ldapexop , which is used exclusively to perform extended search operations, the same as running ldapsearch -E . The format of an extended operation with ldapsearch is generally: When an extended operation is explicitly handled by the OpenLDAP tools, then the extended_operation_type can be an alias, like deref for a dereference search or sss for server-side sorting. A supported extended operation has formatted output. Other extended operations, like GER searches, are passed using their OID rather than an alias, and then the extended_operation_type is the OID. For those unsupported operations, the tool does not recognize the response from the server, so the output is unformatted. For example, the pg extended operation type formats the results in simple pages: The same operation with ldapexop can be run using only the OID of the simple paged results operation and the operation's settings (3 results per page): However, ldapexop does not accept the same range of search parameters that ldapsearch does, making it less flexible.
[ "-E extended_operation_type = operation_parameters", "ldapsearch -x -D \"cn=Directory Manager\" -W -b \"ou=Engineers,ou=People,dc=example,dc=com\" -E pg=3 \"(objectclass=*)\" cn dn: uid=jsmith,ou=Engineers,ou=People,dc=example,dc=com cn: John Smith dn: uid=bjensen,ou=Engineers,ou=People,dc=example,dc=com cn: Barbara Jensen dn: uid=hmartin,ou=Engineers,ou=People,dc=example,dc=com cn: Henry Martin Results are sorted. next page size (3): 5", "ldapexop 1.2.840.113556.1.4.319=3" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ldap-tools-examples
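As another illustration of the -E syntax described above, the following sketch runs a server-side sort (sss) extended search; the bind DN, base DN, and sort attribute are assumptions carried over from the earlier paging example.
# Sort results by cn on the server side; -x uses a simple bind, -W prompts for the password
ldapsearch -x -D "cn=Directory Manager" -W -b "ou=People,dc=example,dc=com" -E sss=cn "(objectclass=*)" cn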
15.11. Displaying Network I/O
15.11. Displaying Network I/O To view the network I/O for all virtual machines on your system: Make sure that the Network I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Stats tab. Select the Network I/O check box. Figure 15.27. Enabling Network I/O To display the Network I/O statistics, from the View menu, select Graph , then the Network I/O check box. Figure 15.28. Selecting Network I/O The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system. Figure 15.29. Displaying Network I/O
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-managing_guests_with_the_virtual_machine_manager_virt_manager-displaying_network_io
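If you prefer the command line to the Virtual Machine Manager graphs, virsh can report per-interface network counters for a guest; this is a sketch, and guest1 and vnet0 are placeholder names for a domain and one of its interfaces.
# List the guest's interfaces, then print RX/TX byte and packet counters for one of them
virsh domiflist guest1
virsh domifstat guest1 vnet0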
3.8. VDB Dependencies
3.8. VDB Dependencies When deploying a virtual database (VDB) in JBoss Data Virtualization, you also have to provide dependency libraries and configuration settings for accessing the physical data sources used by your VDB. (You can identify all dependent physical data sources by looking in META-INF/vdb.xml within the EAP_HOME/MODE /deployments/ DATABASE .vdb file.) For example, if you are trying to integrate Oracle and file sources in your VDB, then you are responsible for providing both the JDBC driver for the Oracle source, and any necessary documents and configuration files that are needed by the file translator. Data source instances may be shared between multiple VDBs and applications. Consider sharing connections to sources that are heavy-weight and resource-constrained. Once you have deployed the VDB and its dependencies, client applications can connect using the JDBC API. If there are any errors in the deployment, the connection attempt will fail and a message will be logged. Use the Management Console (or check the log files) to identify any errors and correct them so you can proceed. See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on how to use JDBC to connect to your VDB. Warning Some data source configuration files may contain passwords or other sensitive information. For instructions on how to avoid storing passwords in plaintext, refer to the JBoss Enterprise Application Platform Security Guide .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/vdb_dependencies
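Because a VDB artifact is packaged as a ZIP archive, you can list its declared physical data sources without deploying it; the sketch below assumes a standalone-mode installation and uses DATABASE.vdb as a placeholder file name.
# Print the embedded descriptor that names the data sources the VDB depends on
unzip -p EAP_HOME/standalone/deployments/DATABASE.vdb META-INF/vdb.xml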
Chapter 2. Creating definitions
Chapter 2. Creating definitions When creating an automated rule definition, you can configure numerous options. Cryostat uses an automated rule to apply rules to any JVM targets that match regular expressions defined in the matchExpression string expression. You can apply Red Hat OpenShift labels or annotations as criteria for a matchExpression definition. After you specify a rule definition for your automated rule, you do not need to re-add or restart matching targets. If you have defined matching targets, you can immediately activate a rule definition. If you want to reuse an existing automated rule definition, you can upload your definition in JSON format to Cryostat. 2.1. Enabling or disabling existing automated rules You can enable or disable existing automated rules by using a toggle switch on the Cryostat web console. Prerequisites Logged in to the Cryostat web console. Created an automated rule. Procedure From the Cryostat web console, click Automated Rules . The Automated Rules window opens and displays your automated rule in a table. Figure 2.1. Example of match expression output from completing an automated rule In the Enabled column, view the Enabled status of the listed automated rules. Depending on the status, choose one of the following actions: To enable the automated rule, click the toggle switch to On . Cryostat immediately evaluates each application that you defined in the automated rule against its match expression. If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. To disable the automated rule, click the toggle switch to Off . The Disable your Automated Rule window opens. To disable the selected automated rule, click Disable . If you want to also stop any active recordings that were created by the selected rule, select Clean then click Disable . 2.2. Creating an automated rule definition While creating an automated rule on the Cryostat web console, you can specify the match expression that Cryostat uses to select all the applications. Then, Cryostat starts a new recording by using a JFR event template that was defined by the rule. If you previously created an automated rule and Cryostat identifies a new target application, Cryostat tests if the new application instance matches the expression and starts a new recording by using the associated event template. Prerequisites Created a Cryostat instance in your Red Hat OpenShift project. Created a Java application. Installed Cryostat 2.4 on Red Hat OpenShift by using the OperatorHub option. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click Create . A Create window opens. Figure 2.2. The Create window (Graph View) for an automated rule Enter a rule name in the Name field. In the Match Expression field, specify the match expression details. Note Select the question mark icon to view suggested syntax in a Match Expression Hint snippet. In the Match Expression Visualizer panel, the Graph View option highlights the target JVMs that are matched. Unmatched target JVMs are greyed out. Optional: In the Match Expression Visualizer panel, you can also click List View , which displays the matched target JVMs as expandable rows. Figure 2.3. The Create window (List View) for an automated rule From the Template list, select an event template. To create your automated rule, click Create . 
The Automated Rules window opens and displays your automated rule in a table. Figure 2.4. Example of match expression output from completing an automated rule If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. Optional: You can download an automated rule by clicking Download from the automated rule's overflow menu. You can then configure a rule definition in your preferred text editor or make additional copies of the file on your local file system. 2.3. Cryostat Match Expression Visualizer panel You can use the Match Expression Visualizer panel on the web console to view information in a JSON structure for your selected target JVM application. You can choose to display the information in a Graph View or a List View mode. The Graph View highlights the target JVMs that are matched. Unmatched target JVMs are greyed out. The List View displays the matched target JVM as expandable rows. To view details about a matched target JVM, select the target JVM that is highlighted. In the window that appears, information specific to the metadata for your application is shown in the Details tab. You can use any of this information as syntax in your match expression. A match expression is a rule definition parameter that you can specify for your automated rule. After you specify match expressions and created the automated rule, Cryostat immediately evaluates each application that you defined in the automated rule against its match expression. If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. 2.4. Uploading an automated rule in JSON You can reuse an existing automated rule by uploading it to the Cryostat web console, so that you can quickly start monitoring a running Java application. Prerequisites Created a Cryostat instance in your project. See Installing Cryostat on OpenShift using an operator (Installing Cryostat). Created a Java application. Created an automated rules file in JSON format. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click the file upload icon, which is located beside the Create button. Figure 2.5. The automated rules upload button The Upload Automated Rules window opens. Click Upload and locate your automated rules files on your local system. You can upload one or more files to Cryostat. Alternatively, you can drag files from your file explorer tool and drop them into the JSON File field on your web console. Important The Upload Automated Rules function only accepts files in JSON format. Figure 2.6. A window prompt where you can upload JSON files that contains your automated rules configuration Optional: If you need to remove a file from the Upload Automated Rules function, click the X icon on the selected file. Figure 2.7. Example of uploaded JSON files Click Submit . 2.5. Metadata labels When you create an automated rule to enable JFR to continuously monitor a running target application, the automated rule automatically generates a metadata label. This metadata label indicates the name of the automated rule that generates the JFR recording. After you archive the recording, you can run a query on the metadata label to locate the automated rule that generated the recording. Cryostat preserves metadata labels for the automated rule in line with the lifetime of the archived recording. 
Additional resources Creating definitions Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording)
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_automated_rules_on_cryostat/assembly_creating-definitions_con_overview-automated-rules
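For the upload procedure above, a rule file is plain JSON. The following sketch writes a hypothetical definition to disk; apart from name and matchExpression, the field names (eventSpecifier, enabled) and the expression syntax are assumptions and should be checked against a rule downloaded from your own Cryostat instance.
# Write an illustrative rule definition; download a real rule first to confirm the exact schema
cat > quarkus-monitor-rule.json <<'EOF'
{
  "name": "quarkus-monitor-rule",
  "matchExpression": "target.labels['app'] == 'my-quarkus-app'",
  "eventSpecifier": "template=Continuous,type=TARGET",
  "enabled": true
}
EOF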
Chapter 17. Impersonating the system:admin user
Chapter 17. Impersonating the system:admin user 17.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 17.2. Impersonating the system:admin user You can grant a user permission to impersonate system:admin , which grants them cluster administrator permissions. Procedure To grant a user permission to impersonate system:admin , run the following command: $ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username> Tip You can alternatively apply the following YAML to grant permission to impersonate system:admin : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username> 17.3. Impersonating the system:admin group When a system:admin user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2> parameters in the command to impersonate the associated groups. Procedure To grant a user permission to impersonate a system:admin by impersonating the associated cluster administration groups, run the following command: $ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \ --as-group=<group1> --as-group=<group2>
[ "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/impersonating-system-admin
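Once the sudoer binding above exists, the user exercises it by passing impersonation flags on individual requests; the commands below are a sketch, and <user> and <group> are placeholders.
# Run a single request as system:admin (requires the sudoer cluster role binding shown above)
oc get nodes --as=system:admin
# When administrator rights come through a group, impersonate the group as well
oc get nodes --as=<user> --as-group=<group>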
Chapter 106. KafkaUserScramSha512ClientAuthentication schema reference
Chapter 106. KafkaUserScramSha512ClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserScramSha512ClientAuthentication type from KafkaUserTlsClientAuthentication and KafkaUserTlsExternalClientAuthentication . It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication . Property Property type Description password Password Specify the password for the user. If not set, a new password is generated by the User Operator. type string Must be scram-sha-512 .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkauserscramsha512clientauthentication-reference
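To see the schema above in context, here is a minimal KafkaUser sketch that sets type: scram-sha-512; the user name my-user, the cluster label my-cluster, the namespace kafka, and the API version shown are assumptions to verify against your installed version.
# Apply a minimal SCRAM-SHA-512 KafkaUser (authorization and quotas omitted)
oc apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
EOF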
20.2. Types
20.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with mysqld . Different types allow you to configure flexible access: mysqld_db_t This type is used for the location of the MariaDB database. In Red Hat Enterprise Linux, the default location for the database is the /var/lib/mysql/ directory, however this can be changed. If the location for the MariaDB database is changed, the new location must be labeled with this type. See the example in Section 20.4.1, "MariaDB Changing Database Location" for instructions on how to change the default database location and how to label the new section appropriately. mysqld_etc_t This type is used for the MariaDB main configuration file /etc/my.cnf and any other configuration files in the /etc/mysql/ directory. mysqld_exec_t This type is used for the mysqld binary located at /usr/libexec/mysqld , which is the default location for the MariaDB binary on Red Hat Enterprise Linux. Other systems may locate this binary at /usr/sbin/mysqld which should also be labeled with this type. mysqld_unit_file_t This type is used for executable MariaDB-related files located in the /usr/lib/systemd/system/ directory by default in Red Hat Enterprise Linux. mysqld_log_t Logs for MariaDB need to be labeled with this type for proper operation. All log files in the /var/log/ directory matching the mysql.* wildcard must be labeled with this type. mysqld_var_run_t This type is used by files in the /var/run/mariadb/ directory, specifically the process id (PID) named /var/run/mariadb/mariadb.pid , which is created by the mysqld daemon when it runs. This type is also used for related socket files such as /var/lib/mysql/mysql.sock . Files such as these must be labeled correctly for proper operation as a confined service.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-mariadb-types
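As a companion to the mysqld_db_t description above, the following sketch labels a non-default database directory; /srv/mysql is a placeholder path.
# Record a persistent file-context rule for the new location, then apply it and verify the label
semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"
restorecon -R -v /srv/mysql
ls -ldZ /srv/mysql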
Chapter 41. Installation and Booting
Chapter 41. Installation and Booting Multi-threaded xz compression in rpm-build Compression can take a long time for highly parallel builds as it currently uses only one core. This is especially problematic for continuous integration of large projects that are built on hardware with many cores. This feature, which is provided as a Technology Preview, enables multi-threaded xz compression for source and binary packages when setting the %_source_payload or %_binary_payload macros to the wLTX.xzdio pattern . In it, L represents the compression level, which is 6 by default, and X is the number of threads to be used (may be multiple digits), for example w6T12.xzdio . This can be done by editing the /usr/lib/rpm/macros file or by declaring the macro within the spec file or at the command line. (BZ#1278924)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/technology_previews_installation_and_booting
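The macro described in the note above can also be set for a single build without editing any files; this sketch assumes an existing example.spec and six worker threads.
# Enable multi-threaded xz payload compression (level 6, 6 threads) for one binary package build
rpmbuild -bb --define '_binary_payload w6T6.xzdio' example.spec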
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component.
null
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/using_your_subscription
probe::nfs.proc.remove
probe::nfs.proc.remove Name probe::nfs.proc.remove - NFS client removes a file on server Synopsis nfs.proc.remove Values prot: transfer protocol; version: NFS version (the function is used for all NFS versions); server_ip: IP address of server; filelen: length of file name; filename: file name; fh: file handle of parent dir
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-remove
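A one-line SystemTap script can exercise the probe described above; this sketch simply prints each removal and assumes that systemtap and the matching kernel debuginfo packages are installed.
# Print the process name, file name, and server address each time the NFS client removes a file
stap -e 'probe nfs.proc.remove { printf("%s removed %s on server %s\n", execname(), filename, server_ip) }'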
Chapter 2. Deploy using dynamic storage devices
Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Red Hat Virtualization gives you the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. 
Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Note Use of Vault namespaces are not supported with the Kubernetes authentication method in OpenShift Data Foundation 4.11. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. 
Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . 
Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide.
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_red_hat_virtualization_platform/deploy-using-dynamic-storage-devices-rhv
Configuring and managing logical volumes
Configuring and managing logical volumes Red Hat Enterprise Linux 8 Configuring and managing LVM Red Hat Customer Content Services
[ "lsblk", "pvcreate /dev/sdb", "pvs PV VG Fmt Attr PSize PFree /dev/sdb lvm2 a-- 28.87g 13.87g", "pvs PV VG Fmt Attr PSize PFree /dev/sdb1 lvm2 --- 28.87g 28.87g", "pvremove /dev/sdb1", "vgreduce VolumeGroupName /dev/sdb1", "vgremove VolumeGroupName", "pvs", "pvs", "vgcreate VolumeGroupName PhysicalVolumeName1 PhysicalVolumeName2", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName 1 0 0 wz--n- 28.87g 28.87g", "vgs", "vgrename OldVolumeGroupName NewVolumeGroupName", "vgs VG #PV #LV #SN Attr VSize VFree NewVolumeGroupName 1 0 0 wz--n- 28.87g 28.87g", "vgs", "pvs", "vgextend VolumeGroupName PhysicalVolumeName", "pvs PV VG Fmt Attr PSize PFree /dev/sda VolumeGroupName lvm2 a-- 28.87g 28.87g /dev/sdd VolumeGroupName lvm2 a-- 1.88g 1.88g", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 1 0 0 wz--n- 28.87g 28.87g VolumeGroupName2 1 0 0 wz--n- 1.88g 1.88g", "vgmerge VolumeGroupName2 VolumeGroupName1", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 2 0 0 wz--n- 30.75g 30.75g", "pvmove /dev/vdb3 /dev/vdb3 : Moved: 2.0% /dev/vdb3 : Moved: 79.2% /dev/vdb3 : Moved: 100.0%", "pvcreate /dev/vdb4 Physical volume \" /dev/vdb4 \" successfully created", "vgextend VolumeGroupName /dev/vdb4 Volume group \" VolumeGroupName \" successfully extended", "pvmove /dev/vdb3 /dev/vdb4 /dev/vdb3 : Moved: 33.33% /dev/vdb3 : Moved: 100.00%", "vgreduce VolumeGroupName /dev/vdb3 Removed \" /dev/vdb3 \" from volume group \" VolumeGroupName \"", "pvs PV VG Fmt Attr PSize PFree Used /dev/vdb1 VolumeGroupName lvm2 a-- 1020.00m 0 1020.00m /dev/vdb2 VolumeGroupName lvm2 a-- 1020.00m 0 1020.00m /dev/vdb3 lvm2 a-- 1020.00m 1008.00m 12.00m", "vgsplit VolumeGroupName1 VolumeGroupName2 /dev/vdb3 Volume group \" VolumeGroupName2 \" successfully split from \" VolumeGroupName1 \"", "lvchange -a n /dev/VolumeGroupName1/LogicalVolumeName", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 2 1 0 wz--n- 34.30G 10.80G VolumeGroupName2 1 0 0 wz--n- 17.15G 17.15G", "pvs PV VG Fmt Attr PSize PFree Used /dev/vdb1 VolumeGroupName1 lvm2 a-- 1020.00m 0 1020.00m /dev/vdb2 VolumeGroupName1 lvm2 a-- 1020.00m 0 1020.00m /dev/vdb3 VolumeGroupName2 lvm2 a-- 1020.00m 1008.00m 12.00m", "umount /dev/mnt/ LogicalVolumeName", "vgchange -an VolumeGroupName vgchange -- volume group \"VolumeGroupName\" successfully deactivated", "vgexport VolumeGroupName vgexport -- volume group \"VolumeGroupName\" successfully exported", "pvscan PV /dev/sda1 is in exported VG VolumeGroupName [17.15 GB / 7.15 GB free] PV /dev/sdc1 is in exported VG VolumeGroupName [17.15 GB / 15.15 GB free] PV /dev/sdd1 is in exported VG VolumeGroupName [17.15 GB / 15.15 GB free]", "vgimport VolumeGroupName", "vgchange -ay VolumeGroupName", "mkdir -p /mnt/ VolumeGroupName /users mount /dev/ VolumeGroupName /users /mnt/ VolumeGroupName /users", "vgs -o vg_name,lv_count VolumeGroupName VG #LV VolumeGroupName 0", "vgremove VolumeGroupName", "vgs -o vg_name,lv_count VolumeGroupName VG #LV VolumeGroupName 0", "vgchange --lockstop VolumeGroupName", "vgremove VolumeGroupName", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --name LogicalVolumeName --size VolumeSize VolumeGroupName", "lvs -o lv_name,seg_type LV Type LogicalVolumeName linear", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data", "ansible-playbook 
--syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs myvg'", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --stripes NumberOfStripes --stripesize StripeSize --size LogicalVolumeSize --name LogicalVolumeName VolumeGroupName", "lvs -o lv_name,seg_type LV Type LogicalVolumeName striped", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --type raid level --stripes NumberOfStripes --stripesize StripeSize --size Size --name LogicalVolumeName VolumeGroupName", "lvcreate --type raid1 --mirrors MirrorsNumber --size Size --name LogicalVolumeName VolumeGroupName", "lvcreate --type raid10 --mirrors MirrorsNumber --stripes NumberOfStripes --stripesize StripeSize --size Size --name LogicalVolumeName VolumeGroupName", "lvs -o lv_name,seg_type LV Type LogicalVolumeName raid0", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --type thin-pool --size PoolSize --name ThinPoolName VolumeGroupName", "lvcreate --type thin --virtualsize MaxVolumeSize --name ThinVolumeName --thinpool ThinPoolName VolumeGroupName", "lvs -o lv_name,seg_type LV Type ThinPoolName thin-pool ThinVolumeName thin", "lvs -o lv_name,lv_size,vg_name,vg_size,vg_free LV LSize VG VSize VFree LogicalVolumeName 1.49g VolumeGroupName 30.75g 29.11g", "lvextend --size + AdditionalSize --resizefs VolumeGroupName / LogicalVolumeName", "lvs -o lv_name,lv_size LV LSize NewLogicalVolumeName 6.49g", "lvs -o lv_name,lv_size,data_percent LV LSize Data% MyThinPool 20.10g 3.21 ThinVolumeName 1.10g 4.88", "lvextend --size + AdditionalSize --resizefs VolumeGroupName / ThinVolumeName", "lvs -o lv_name,lv_size,data_percent LV LSize Data% MyThinPool 20.10g 3.21 ThinVolumeName 6.10g 0.43", "lvs -o lv_name,seg_type,data_percent,metadata_percent LV Type Data% Meta% ThinPoolName thin-pool 97.66 26.86 ThinVolumeName thin 48.80", "lvextend -L Size VolumeGroupName/ThinPoolName", "lvs -o lv_name,seg_type,data_percent,metadata_percent LV Type Data% Meta% ThinPoolName thin-pool 24.41 16.93 ThinVolumeName thin 24.41", "lvs -o lv_name,seg_type,data_percent LV Type Data% ThinPoolName thin-pool 93.87", "lvextend -L Size VolumeGroupName/ThinPoolName _tdata", "lvs -o lv_name,seg_type,data_percent LV Type Data% ThinPoolName thin-pool 40.23", "lvs -o lv_name,seg_type,metadata_percent LV Type Meta% ThinPoolName thin-pool 75.00", "lvextend -L Size VolumeGroupName/ThinPoolName _tmeta", "lvs -o lv_name,seg_type,metadata_percent LV Type Meta% ThinPoolName thin-pool 0.19", "lvs -o lv_name,vg_name,seg_monitor LV VG Monitor ThinPoolName VolumeGroupName not monitored", "lvchange --monitor y VolumeGroupName/ThinPoolName", "thin_pool_autoextend_threshold = 70 thin_pool_autoextend_percent = 20", "systemctl restart lvm2-monitor", "lvs -o lv_name,vg_name,lv_size LV VG LSize LogicalVolumeName VolumeGroupName 6.49g", "findmnt -o SOURCE,TARGET /dev/VolumeGroupName/LogicalVolumeName SOURCE TARGET /dev/mapper/VolumeGroupName-NewLogicalVolumeName /MountPoint", "umount /MountPoint", "e2fsck -f /dev/VolumeGroupName/LogicalVolumeName", "lvreduce --size TargetSize --resizefs VolumeGroupName/LogicalVolumeName", "mount -o remount /MountPoint", "df -hT /MountPoint/ Filesystem Type Size Used Avail Use% Mounted on /dev/mapper/VolumeGroupName-NewLogicalVolumeName ext4 2.9G 139K 2.7G 1% /MountPoint", "lvs -o lv_name,lv_size LV LSize NewLogicalVolumeName 4.00g", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvrename VolumeGroupName/LogicalVolumeName 
VolumeGroupName/NewLogicalVolumeName", "lvs -o lv_name LV NewLogicalVolumeName", "lvs -o lv_name,lv_path LV Path LogicalVolumeName /dev/VolumeGroupName/LogicalVolumeName", "findmnt -o SOURCE,TARGET /dev/VolumeGroupName/LogicalVolumeName SOURCE TARGET /dev/mapper/VolumeGroupName-LogicalVolumeName /MountPoint", "umount /MountPoint", "lvremove VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName VolumeGroupName/LogicalVolumeName", "lvchange --activate y VolumeGroupName / LogicalVolumeName", "lvdisplay VolumeGroupName / LogicalVolumeName LV Status available", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName /dev/VolumeGroupName/LogicalVolumeName", "findmnt -o SOURCE,TARGET /dev/VolumeGroupName/LogicalVolumeName SOURCE TARGET /dev/mapper/VolumeGroupName-LogicalVolumeName /MountPoint", "umount /MountPoint", "lvchange --activate n VolumeGroupName / LogicalVolumeName", "lvdisplay VolumeGroupName/LogicalVolumeName LV Status NOT available", "lvs -o vg_name,lv_name,lv_size VG LV LSize VolumeGroupName LogicalVolumeName 10.00g", "lvcreate --snapshot --size SnapshotSize --name SnapshotName VolumeGroupName / LogicalVolumeName", "lvs -o lv_name,origin LV Origin LogicalVolumeName SnapshotName LogicalVolumeName", "lvs -o vg_name,lv_name,origin,data_percent,lv_size VG LV Origin Data% LSize VolumeGroupName LogicalVolumeName 10.00g VolumeGroupName SnapshotName LogicalVolumeName 82.00 5.00g", "lvextend --size + AdditionalSize VolumeGroupName / SnapshotName", "lvs -o vg_name,lv_name,origin,data_percent,lv_size VG LV Origin Data% LSize VolumeGroupName LogicalVolumeName 10.00g VolumeGroupName SnapshotName LogicalVolumeName 68.33 6.00g", "snapshot_autoextend_threshold = 70 snapshot_autoextend_percent = 20", "systemctl restart lvm2-monitor", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName /dev/VolumeGroupName/LogicalVolumeName SnapshotName VolumeGroupName /dev/VolumeGroupName/SnapshotName", "findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/LogicalVolumeName findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/SnapshotName", "umount /LogicalVolume/MountPoint umount /Snapshot/MountPoint", "lvchange --activate n VolumeGroupName / LogicalVolumeName lvchange --activate n VolumeGroupName / SnapshotName", "lvconvert --merge SnapshotName", "lvchange --activate y VolumeGroupName / LogicalVolumeName", "umount /LogicalVolume/MountPoint", "lvs -o lv_name", "lvs -o lv_name,vg_name,pool_lv,lv_size LV VG Pool LSize PoolName VolumeGroupName 152.00m ThinVolumeName VolumeGroupName PoolName 100.00m", "lvcreate --snapshot --name SnapshotName VolumeGroupName / ThinVolumeName", "lvs -o lv_name,origin LV Origin PoolName SnapshotName ThinVolumeName ThinVolumeName", "lvs -o lv_name,vg_name,lv_path LV VG Path ThinPoolName VolumeGroupName ThinSnapshotName VolumeGroupName /dev/VolumeGroupName/ThinSnapshotName ThinVolumeName VolumeGroupName /dev/VolumeGroupName/ThinVolumeName", "findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/ThinVolumeName", "umount /ThinLogicalVolume/MountPoint", "lvchange --activate n VolumeGroupName / ThinLogicalVolumeName", "lvconvert --mergethin VolumeGroupName/ThinSnapshotName", "umount /ThinLogicalVolume/MountPoint", "lvs -o lv_name", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvcreate --type cache-pool --name CachePoolName --size Size VolumeGroupName /FastDevicePath", "lvconvert --type cache --cachepool VolumeGroupName / CachePoolName VolumeGroupName / LogicalVolumeName", "lvs -o 
lv_name,pool_lv LV Pool LogicalVolumeName [CachePoolName_cpool]", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvcreate --name CacheVolumeName --size Size VolumeGroupName /FastDevicePath", "lvconvert --type writecache --cachevol CacheVolumeName VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,pool_lv LV Pool LogicalVolumeName [CacheVolumeName_cvol]", "lvs -o lv_name,pool_lv,vg_name LV Pool VG LogicalVolumeName [CacheVolumeName_cvol] VolumeGroupName", "lvconvert --splitcache VolumeGroupName/LogicalVolumeName", "lvconvert --uncache VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,pool_lv", "vgs -o vg_name VG VolumeGroupName", "lsblk", "lvcreate --name ThinPoolDataName --size Size VolumeGroupName /DevicePath", "lvcreate --name ThinPoolMetadataName --size Size VolumeGroupName /DevicePath", "lvconvert --type thin-pool --poolmetadata ThinPoolMetadataName VolumeGroupName/ThinPoolDataName", "lvs -o lv_name,seg_type LV Type ThinPoolDataName thin-pool", "lvcreate -s rhel/root -kn -n root_snapshot_before_changes Logical volume \"root_snapshot_before_changes\" created.", "lvcreate -s rhel/root -n root_snapshot_before_changes -L 25g Logical volume \"root_snapshot_before_changes\" created.", "grub2-mkconfig > /boot/grub2/grub.cfg Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-1160.118.1.el7.x86_64 Found initrd image: /boot/initramfs-3.10.0-1160.118.1.el7.x86_64.img Found linux image: /boot/vmlinuz-3.10.0-1160.el7.x86_64 Found initrd image: /boot/initramfs-3.10.0-1160.el7.x86_64.img Found linux image: /boot/vmlinuz-0-rescue-f9f6209866c743739757658d1a4850b2 Found initrd image: /boot/initramfs-0-rescue-f9f6209866c743739757658d1a4850b2.img done", "boom profile create --from-host --uname-pattern el7 Created profile with os_id f150f3d: OS ID: \"f150f3d6693495254255d46e20ecf5c690ec3262\", Name: \"Red Hat Enterprise Linux Server\", Short name: \"rhel\", Version: \"7.9 (Maipo)\", Version ID: \"7.9\", Kernel pattern: \"/vmlinuz-%{version}\", Initramfs pattern: \"/initramfs-%{version}.img\", Root options (LVM2): \"rd.lvm.lv=%{lvm_root_lv}\", Root options (BTRFS): \"rootflags=%{btrfs_subvolume}\", Options: \"root=%{root_device} ro %{root_opts}\", Title: \"%{os_name} %{os_version_id} (%{version})\", Optional keys: \"grub_users grub_arg grub_class id\", UTS release pattern: \"el7\"", "boom create --backup --title \"Root LV snapshot before changes\" --rootlv rhel/ root_snapshot_before_changes Created entry with boot_id bfef767: title Root LV snapshot before changes machine-id 7d70d7fcc6884be19987956d0897da31 version 3.10.0-1160.114.2.el7.x86_64 linux /vmlinuz-3.10.0-1160.114.2.el7.x86_64.boom0 initrd /initramfs-3.10.0-1160.114.2.el7.x86_64.img.boom0 options root=/dev/rhel/root_snapshot_before_changes ro rd.lvm.lv=rhel/root_snapshot_before_changes grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "leapp upgrade ==> Processing phase `configuration_phase` ====> * ipu_workflow_config IPU workflow config actor ==> Processing phase `FactsCollection` ============================================================ REPORT OVERVIEW ============================================================ Upgrade has been inhibited due to the following problems: 1. Btrfs has been removed from RHEL8 2. Missing required answers in the answer file HIGH and MEDIUM severity reports: 1. Packages available in excluded repositories will not be installed 2. GRUB core will be automatically updated during the upgrade 3. Difference in Python versions and support in RHEL 8 4. 
chrony using default configuration Reports summary: Errors: 0 Inhibitors: 2 HIGH severity reports: 3 MEDIUM severity reports: 1 LOW severity reports: 3 INFO severity reports: 4 Before continuing consult the full report: A report has been generated at /var/log/leapp/leapp-report.json A report has been generated at /var/log/leapp/leapp-report.txt ============================================================ END OF REPORT OVERVIEW ============================================================", "leapp upgrade --reboot ==> Processing phase `configuration_phase` ====> * ipu_workflow_config IPU workflow config actor ==> Processing phase `FactsCollection`", "reboot", "cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-3.10.0-1160.118.1.el7.x86_64.boom0 root=/dev/rhel/root_snapshot_before_changes ro rd.lvm.lv=rhel/root_snapshot_before_changes", "boom list WARNING - Options for BootEntry(boot_id=cae29bf) do not match OsProfile: marking read-only BootID Version Name RootDevice e0252ad 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/rhel/root_snapshot_before_changes 611ad14 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root 3bfed71- 3.10.0-1160.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root _cae29bf 4.18.0-513.24.1.el8_9.x86_64 Red Hat Enterprise Linux /dev/mapper/rhel-root", "boom delete --boot-id e0252ad Deleted 1 entry", "lvremove rhel/ root_snapshot_before_changes Do you really want to remove active logical volume rhel/root_snapshot_before_changes ? [y/n]: y Logical volume \" root_snapshot_before_changes \" successfully removed", "lvconvert --merge rhel/ root_snapshot_before_changes Logical volume rhel/root_snapshot_before_changes contains a filesystem in use. Delaying merge since snapshot is open. Merging of thin snapshot rhel/root_snapshot_before_changes will occur on next activation of rhel/root.", "boom create --backup --title \"RHEL Rollback\" --rootlv rhel/root Created entry with boot_id 1e6d298 : title RHEL Rollback machine-id f9f6209866c743739757658d1a4850b2 version 3.10.0-1160.118.1.el7.x86_64 linux /vmlinuz-3.10.0-1160.118.1.el7.x86_64.boom0 initrd /initramfs-3.10.0-1160.118.1.el7.x86_64.img.boom0 options root=/dev/rhel/root ro rd.lvm.lv=rhel/root grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "reboot", "rm -f /boot/loader/entries/*.el8*", "rm -f /boot/*.el8*", "grub2-mkconfig -o /boot/grub2/grub.cfg Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-1160.118.1.el7.x86_64.boom0 . 
done", "new-kernel-pkg --update USD(uname -r)", "boom list -o+title BootID Version Name RootDevice Title a49fb09 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root Red Hat Enterprise Linux (3.10.0-1160.118.1.el7.x86_64) 8.9 (Ootpa) 1bb11e4 3.10.0-1160.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root Red Hat Enterprise Linux (3.10.0-1160.el7.x86_64) 8.9 (Ootpa) e0252ad 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/rhel/root_snapshot_before_changes Root LV snapshot before changes 1e6d298 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/rhel/root RHEL Rollback", "boom delete e0252ad Deleted 1 entry boom delete 1e6d298 Deleted 1 entry", "pvs PV VG Fmt Attr PSize PFree /dev/vdb1 VolumeGroupName lvm2 a-- 17.14G 17.14G /dev/vdb2 VolumeGroupName lvm2 a-- 17.14G 17.09G /dev/vdb3 VolumeGroupName lvm2 a-- 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free PV PSize PFree /dev/vdb1 17.14G 17.14G /dev/vdb2 17.14G 17.09G /dev/vdb3 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free -O pv_free PV PSize PFree /dev/vdb2 17.14G 17.09G /dev/vdb1 17.14G 17.14G /dev/vdb3 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free -O -pv_free PV PSize PFree /dev/vdb1 17.14G 17.14G /dev/vdb3 17.14G 17.14G /dev/vdb2 17.14G 17.09G", "vgs myvg VG #PV #LV #SN Attr VSize VFree myvg 1 1 0 wz-n <931.00g <930.00g", "pvs --units g /dev/vdb PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 931.00g 930.00g", "pvs --units G /dev/vdb PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 999.65G 998.58G", "pvs --units s PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 1952440320S 1950343168S", "pvs --units 4m PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 238335.00U 238079.00U", "lvs_cols=\"lv_name,vg_name,lv_attr\"", "compact_output = 1", "units = \"G\"", "report { }", "lvmconfig --typeconfig diff", "pvs -S name=~nvme PV Fmt Attr PSize PFree /dev/nvme2n1 lvm2 --- 1.00g 1.00g", "pvs -S vg_name=myvg PV VG Fmt Attr PSize PFree /dev/vdb1 myvg lvm2 a-- 1020.00m 396.00m /dev/vdb2 myvg lvm2 a-- 1020.00m 896.00m", "lvs -S 'size > 100m && size < 200m' LV VG Attr LSize Cpy%Sync rr myvg rwi-a-r--- 120.00m 100.00", "lvs -S name=~lvol[02] LV VG Attr LSize lvol0 myvg -wi-a----- 100.00m lvol2 myvg -wi------- 100.00m", "lvs -S segtype=raid1 LV VG Attr LSize Cpy%Sync rr myvg rwi-a-r--- 120.00m 100.00", "lvchange --addtag mytag -S active=1 Logical volume myvg/mylv changed. Logical volume myvg/lvol0 changed. Logical volume myvg/lvol1 changed. 
Logical volume myvg/rr changed.", "lvs -a -o lv_name,vg_name,attr,size,pool_lv,origin,role -S 'name!~_pmspare' LV VG Attr LSize Pool Origin Role thin1 example Vwi-a-tz-- 2.00g tp public,origin,thinorigin thin1s example Vwi---tz-- 2.00g tp thin1 public,snapshot,thinsnapshot thin2 example Vwi-a-tz-- 3.00g tp public tp example twi-aotz-- 1.00g private [tp_tdata] example Twi-ao---- 1.00g private,thin,pool,data [tp_tmeta] example ewi-ao---- 4.00m private,thin,pool,metadata", "lvchange --setactivationskip n -S 'role=thinsnapshot && origin=thin1' Logical volume myvg/thin1s changed.", "lvs -a -S 'name=~_tmeta && role=metadata && size <= 4m' LV VG Attr LSize [tp_tmeta] myvg ewi-ao---- 4.00m", "filter = [ \"r|^ path_to_device USD|\" ]", "system_id_source = \"uname\"", "vgchange --systemid <VM_system_id> <VM_vg_name>", "filter = [ \"r|^ path_to_device USD|\" ]", "system_id_source = \"uname\"", "vgchange --systemid <system_id> <vg_name>", "filter = [ \"a|^ path_to_device USD|\" ]", "system_id_source = \"uname\"", "filter = [\"a|^ path_to_device USD|\" ]", "use_lvmlockd=1", "--- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg Logical volume \" my_lv \" created.", "lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lsblk'", "lvcreate --type raid0 -L 2G --stripes 3 --stripesize 4 -n mylv my_vg Rounding size 2.00 GiB (512 extents) up to stripe boundary size 2.00 GiB(513 extents). 
Logical volume \" mylv \" created.", "mkfs.ext4 /dev/my_vg/mylv", "mount /dev/my_vg/mylv /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/my_vg-mylv 2002684 6168 1875072 1% /mnt", "lvs -a -o +devices,segtype my_vg LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Type mylv my_vg rwi-a-r--- 2.00g mylv_rimage_0(0),mylv_rimage_1(0),mylv_rimage_2(0) raid0 [mylv_rimage_0] my_vg iwi-aor--- 684.00m /dev/sdf1(0) linear [mylv_rimage_1] my_vg iwi-aor--- 684.00m /dev/sdg1(0) linear [mylv_rimage_2] my_vg iwi-aor--- 684.00m /dev/sdh1(0) linear", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs raid_level: raid0 raid_stripe_size: \"256 KiB\" state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'", "lvcreate --type raid1 --raidintegrity y -L 256M -n test-lv my_vg Creating integrity metadata LV test-lv_rimage_0_imeta with size 8.00 MiB. Logical volume \" test-lv_rimage_0_imeta \" created. Creating integrity metadata LV test-lv_rimage_1_imeta with size 8.00 MiB. Logical volume \" test-lv_rimage_1_imeta \" created. Logical volume \" test-lv \" created.", "lvconvert --raidintegrity y my_vg/test-lv", "lvconvert --raidintegrity n my_vg/test-lv Logical volume my_vg/test-lv has removed integrity.", "lvs -a my_vg LV VG Attr LSize Origin Cpy%Sync test-lv my_vg rwi-a-r--- 256.00m 2.10 [test-lv_rimage_0] my_vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 93.75 [test-lv_rimage_0_imeta] my_vg ewi-ao---- 8.00m [test-lv_rimage_0_iorig] my_vg -wi-ao---- 256.00m [test-lv_rimage_1] my_vg gwi-aor--- 256.00m [test-lv_rimage_1_iorig] 85.94 [...]", "lvs -a my-vg -o+segtype LV VG Attr LSize Origin Cpy%Sync Type test-lv my_vg rwi-a-r--- 256.00m 87.96 raid1 [test-lv_rimage_0] my_vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 100.00 integrity [test-lv_rimage_0_imeta] my_vg ewi-ao---- 8.00m linear [test-lv_rimage_0_iorig] my_vg -wi-ao---- 256.00m linear [test-lv_rimage_1] my_vg gwi-aor--- 256.00m [test-lv_rimage_1_iorig] 100.00 integrity [...]", "lvs -o+integritymismatches my_vg/test-lv_rimage_0 LV VG Attr LSize Origin Cpy%Sync IntegMismatches [test-lv_rimage_0] my_vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 100.00 0", "lvcreate --type raid5 -i 3 -L 500M -n my_lv my_vg Using default stripesize 64.00 KiB. Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents). Logical volume \"my_lv\" created.", "lvs -a -o +devices,segtype LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Type my_lv my_vg rwi-a-r--- 504.00m 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0),my_lv_rimage_3(0) raid5 [my_lv_rimage_0] my_vg iwi-aor--- 168.00m /dev/sda(1) linear", "lvconvert --type raid6 my_vg/my_lv Using default stripesize 64.00 KiB. Replaced LV type raid6 (same as raid6_zr) with possible type raid6_ls_6. Repeat this command to convert to raid6 after an interim conversion has finished. Are you sure you want to convert raid5 LV my_vg/my_lv to raid6_ls_6 type? 
[y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert --type raid6 my_vg/my_lv", "lvs -a -o +devices,segtype LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Type my_lv my_vg rwi-a-r--- 504.00m 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0),my_lv_rimage_3(0),my_lv_rimage_4(0) raid6 [my_lv_rimage_0] my_vg iwi-aor--- 172.00m /dev/sda(1) linear", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(0)", "lvconvert --type raid1 -m 1 my_vg/my_lv Are you sure you want to convert linear LV my_vg/my_lv to raid1 with 2 images enhancing resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m0 my_vg/my_lv Are you sure you want to convert raid1 LV my_vg/my_lv to type linear losing all resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert -m0 my_vg/my_lv /dev/sde1", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdf1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 15.20 my_lv_mimage_0(0),my_lv_mimage_1(0) [my_lv_mimage_0] /dev/sde1(0) [my_lv_mimage_1] /dev/sdf1(0) [my_lv_mlog] /dev/sdd1(0)", "lvconvert --type raid1 my_vg/my_lv Are you sure you want to convert mirror LV my_vg/my_lv to raid1 type? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(0) [my_lv_rmeta_0] /dev/sde1(125) [my_lv_rmeta_1] /dev/sdf1(125)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m 2 my_vg/my_lv Are you sure you want to convert raid1 LV my_vg/my_lv to 3 images enhancing resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert -m 2 my_vg/my_lv /dev/sdd1", "lvconvert -m1 my_vg/my_lv Are you sure you want to convert raid1 LV my_vg/my_lv to 2 images reducing resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert -m1 my_vg/my_lv /dev/sde1", "lvs -a -o name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdd1(1) [my_lv_rimage_1] /dev/sde1(1) [my_lv_rimage_2] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sdd1(0) [my_lv_rmeta_1] /dev/sde1(0) [my_lv_rmeta_2] /dev/sdf1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 12.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert --splitmirror 1 -n new my_vg/my_lv Are you sure you want to split raid1 LV my_vg/my_lv losing all resilience? 
[y/n]: y", "lvconvert --splitmirror 1 -n new my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1) new /dev/sdf1(1)", "lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \" my_lv \" created", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_2 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_2' to merge back into my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvconvert --merge my_vg/my_lv_rimage_1 my_vg/my_lv_rimage_1 successfully merged back into my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvs --all --options name,copy_percent,devices my_vg /dev/sdb: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] [unknown](1) [my_lv_rimage_1] /dev/sdc1(1) [...]", "vi /etc/lvm/lvm.conf raid_fault_policy = \"allocate\"", "lvs -a -o name,copy_percent,devices my_vg Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. LV Copy% Devices lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0) [lv_rimage_0] /dev/sdh1(1) [lv_rimage_1] /dev/sdc1(1) [lv_rimage_2] /dev/sdd1(1) [lv_rmeta_0] /dev/sdh1(0) [lv_rmeta_1] /dev/sdc1(0) [lv_rmeta_2] /dev/sdd1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "vi /etc/lvm/lvm.conf # This configuration option has an automatic default value. raid_fault_policy = \"warn\"", "grep lvm /var/log/messages Apr 14 18:48:59 virt-506 kernel: sd 25:0:0:0: rejecting I/O to offline device Apr 14 18:48:59 virt-506 kernel: I/O error, dev sdb, sector 8200 op 0x1:(WRITE) flags 0x20800 phys_seg 0 prio class 2 [...] Apr 14 18:48:59 virt-506 dmeventd[91060]: WARNING: VG my_vg is missing PV 9R2TVV-bwfn-Bdyj-Gucu-1p4F-qJ2Q-82kCAF (last written to /dev/sdb). Apr 14 18:48:59 virt-506 dmeventd[91060]: WARNING: Couldn't find device with uuid 9R2TVV-bwfn-Bdyj-Gucu-1p4F-qJ2Q-82kCAF. 
Apr 14 18:48:59 virt-506 dmeventd[91060]: Use 'lvconvert --repair my_vg/ly_lv' to replace failed device.", "lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \"my_lv\" created", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdb2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdb2(0) [my_lv_rmeta_2] /dev/sdc1(0)", "lvconvert --replace /dev/sdb2 my_vg/my_lv", "lvconvert --replace /dev/sdb1 my_vg/my_lv /dev/sdd1", "lvconvert --replace /dev/sdb1 --replace /dev/sdc1 my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 37.50 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc2(0) [my_lv_rmeta_2] /dev/sdc1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 60.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rimage_2] /dev/sde1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0) [my_lv_rmeta_2] /dev/sde1(0)", "lvs --all --options name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] [unknown](1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] [unknown](0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --repair my_vg/my_lv /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y Faulty devices in my_vg/my_lv successfully replaced.", "lvconvert --repair my_vg/my_lv replacement_pv", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address /dev/sdc1: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. 
LV Cpy%Sync Devices my_lv 43.79 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "vgreduce --removemissing my_vg", "pvscan PV /dev/sde1 VG rhel_virt-506 lvm2 [<7.00 GiB / 0 free] PV /dev/sdb1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free]", "lvs --all --options name,copy_percent,devices my_vg my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvchange --maxrecoveryrate 4K my_vg/my_lv Logical volume _my_vg/my_lv_changed.", "lvchange --syncaction repair my_vg/my_lv", "lvchange --syncaction check my_vg/my_lv", "lvchange --syncaction repair my_vg/my_lv", "lvs -o +raid_sync_action,raid_mismatch_count my_vg/my_lv LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert SyncAction Mismatches my_lv my_vg rwi-a-r--- 500.00m 100.00 idle 0", "lvcreate --type raid5 -i 2 -L 500M -n my_lv my_vg Using default stripesize 64.00 KiB. Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents). Logical volume \"my_lv\" created.", "lvs -a -o +devices LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices my_lv my_vg rwi-a-r--- 504.00m 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] my_vg iwi-aor--- 252.00m /dev/sda(1) [my_lv_rimage_1] my_vg iwi-aor--- 252.00m /dev/sdb(1) [my_lv_rimage_2] my_vg iwi-aor--- 252.00m /dev/sdc(1) [my_lv_rmeta_0] my_vg ewi-aor--- 4.00m /dev/sda(0) [my_lv_rmeta_1] my_vg ewi-aor--- 4.00m /dev/sdb(0) [my_lv_rmeta_2] my_vg ewi-aor--- 4.00m /dev/sdc(0)", "lvs -o stripes my_vg/my_lv #Str 3", "lvs -o stripesize my_vg/my_lv Stripe 64.00k", "lvconvert --stripes 3 my_vg/my_lv Using default stripesize 64.00 KiB. WARNING: Adding stripes to active logical volume my_vg/my_lv will grow it from 126 to 189 extents! Run \"lvresize -l126 my_vg/my_lv\" to shrink it or use the additional capacity. Are you sure you want to add 1 images to raid5 LV my_vg/my_lv? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert --stripesize 128k my_vg/my_lv Converting stripesize 64.00 KiB of raid5 LV my_vg/my_lv to 128.00 KiB. Are you sure you want to convert raid5 LV my_vg/my_lv? 
[y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvchange --maxrecoveryrate 4M my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvchange --minrecoveryrate 1M my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvchange --syncaction check my_vg/my_lv", "lvchange --writemostly /dev/sdb my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvchange --writebehind 100 my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvs -o stripes my_vg/my_lv #Str 4", "lvs -o stripesize my_vg/my_lv Stripe 128.00k", "lvs -a -o +raid_max_recovery_rate LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert MaxSync my_lv my_vg rwi-a-r--- 10.00g 100.00 4096 [my_lv_rimage_0] my_vg iwi-aor--- 10.00g [...]", "lvs -a -o +raid_min_recovery_rate LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert MinSync my_lv my_vg rwi-a-r--- 10.00g 100.00 1024 [my_lv_rimage_0] my_vg iwi-aor--- 10.00g [...]", "lvs -a LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert my_lv my_vg rwi-a-r--- 10.00g 2.66 [my_lv_rimage_0] my_vg iwi-aor--- 10.00g [...]", "lvcreate --type raid1 -m 1 -L 10G test Logical volume \"lvol0\" created.", "lvs -a -o +devices,region_size LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Region lvol0 test rwi-a-r--- 10.00g 100.00 lvol0_rimage_0(0),lvol0_rimage_1(0) 2.00m [lvol0_rimage_0] test iwi-aor--- 10.00g /dev/sde1(1) 0 [lvol0_rimage_1] test iwi-aor--- 10.00g /dev/sdf1(1) 0 [lvol0_rmeta_0] test ewi-aor--- 4.00m /dev/sde1(0) 0 [lvol0_rmeta_1] test ewi-aor--- 4.00m", "cat /etc/lvm/lvm.conf | grep raid_region_size Configuration option activation/raid_region_size. # raid_region_size = 2048", "lvconvert -R 4096K my_vg/my_lv Do you really want to change the region_size 512.00 KiB of LV my_vg/my_lv to 4.00 MiB? [y/n]: y Changed region size on RAID LV my_vg/my_lv to 4.00 MiB.", "lvchange --resync my_vg/my_lv Do you really want to deactivate logical volume my_vg/my_lv to resync it? [y/n]: y", "lvs -a -o +devices,region_size LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Region lvol0 test rwi-a-r--- 10.00g 6.25 lvol0_rimage_0(0),lvol0_rimage_1(0) 4.00m [lvol0_rimage_0] test iwi-aor--- 10.00g /dev/sde1(1) 0 [lvol0_rimage_1] test iwi-aor--- 10.00g /dev/sdf1(1) 0 [lvol0_rmeta_0] test ewi-aor--- 4.00m /dev/sde1(0) 0", "cat /etc/lvm/lvm.conf | grep raid_region_size Configuration option activation/raid_region_size. # raid_region_size = 4096", "filter = [ \"a|.*|\" ]", "filter = [ \"r|^/dev/cdromUSD|\" ]", "filter = [ \"a|loop|\", \"r|.*|\" ]", "filter = [ \"a|loop|\", \"a|/dev/sd.*|\", \"r|.*|\" ]", "filter = [ \"a|^/dev/sda8USD|\", \"r|.*|\" ]", "filter = [ \"a|/dev/disk/by-id/<disk-id>.|\", \"a|/dev/mapper/mpath.|\", \"r|.*|\" ]", "lvs --config 'devices{ filter = [ \"a|/dev/emcpower. |\", \"r| .|\" ] }'", "filter = [ \"a|/dev/emcpower.*|\", \"r|*.|\" ]", "dracut --force --verbose", "vgcreate <vg_name> <PV>", "lvcreate -n <lv_name> -L <lv_size> <vg_name> [ <PV> ... ]", "lvcreate -n lv1 -L1G vg /dev/sda", "lvcreate -n lv2 L1G vg /dev/sda /dev/sdb", "lvcreate -n lv3 -L1G vg", "lvcreate --type <segment_type> -m <mirror_images> -n <lv_name> -L <lv_size> <vg_name> [ <PV> ... 
]", "lvcreate --type raid1 -m 1 -n lv4 -L1G vg /dev/sda /dev/sdb", "lvcreate --type raid1 -m 2 -n lv5 -L1G vg /dev/sda /dev/sdb /dev/sdc", "pvchange -x n /dev/sdk1", "lvs @database", "lvm tags", "pvchange --addtag <@tag> <PV>", "vgchange --addtag <@tag> <VG>", "vgcreate --addtag <@tag> <VG>", "lvchange --addtag <@tag> <LV>", "lvcreate --addtag <@tag>", "pvchange --deltag @tag PV", "vgchange --deltag @tag VG", "lvchange --deltag @tag LV", "tags { tag1 { } tag2 { host_list = [\"host1\"] } }", "activation { volume_list = [\"vg1/lvol0\", \"@database\" ] }", "tags { hosttags = 1 }", "lvmdump", "lvs -v", "pvs --all", "dmsetup info --columns", "lvmconfig", "vgs --options +devices /dev/vdb1: open failed: No such device or address /dev/vdb1: open failed: No such device or address WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s. WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/sdb1). WARNING: Couldn't find all devices for LV myvg/mylv while checking used and assumed devices. VG #PV #LV #SN Attr VSize VFree Devices myvg 2 2 0 wz-pn- <3.64t <3.60t [unknown](0) myvg 2 2 0 wz-pn- <3.64t <3.60t [unknown](5120),/dev/vdb1(0)", "lvs --all --options +devices /dev/vdb1: open failed: No such device or address /dev/vdb1: open failed: No such device or address WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s. WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/sdb1). WARNING: Couldn't find all devices for LV myvg/mylv while checking used and assumed devices. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices mylv myvg -wi-a---p- 20.00g [unknown](0) [unknown](5120),/dev/sdc1(0)", "pvs Error reading device /dev/sdc1 at 0 length 4. Error reading device /dev/sdc1 at 4096 length 4. Couldn't find device with uuid b2J8oD-vdjw-tGCA-ema3-iXob-Jc6M-TC07Rn. WARNING: Couldn't find all devices for LV myvg/my_raid1_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV myvg/my_raid1_rmeta_1 while checking used and assumed devices. PV VG Fmt Attr PSize PFree /dev/sda2 rhel_bp-01 lvm2 a-- <464.76g 4.00m /dev/sdb1 myvg lvm2 a-- <836.69g 736.68g /dev/sdd1 myvg lvm2 a-- <836.69g <836.69g /dev/sde1 myvg lvm2 a-- <836.69g <836.69g [unknown] myvg lvm2 a-m <836.69g 736.68g", "lvs -a --options name,vgname,attr,size,devices myvg Couldn't find device with uuid b2J8oD-vdjw-tGCA-ema3-iXob-Jc6M-TC07Rn. WARNING: Couldn't find all devices for LV myvg/my_raid1_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV myvg/my_raid1_rmeta_1 while checking used and assumed devices. LV VG Attr LSize Devices my_raid1 myvg rwi-a-r-p- 100.00g my_raid1_rimage_0(0),my_raid1_rimage_1(0) [my_raid1_rimage_0] myvg iwi-aor--- 100.00g /dev/sdb1(1) [my_raid1_rimage_1] myvg Iwi-aor-p- 100.00g [unknown](1) [my_raid1_rmeta_0] myvg ewi-aor--- 4.00m /dev/sdb1(0) [my_raid1_rmeta_1] myvg ewi-aor-p- 4.00m [unknown](0)", "vgchange --activate y --partial myvg", "vgreduce --removemissing --test myvg", "vgreduce --removemissing --force myvg", "vgcfgrestore myvg", "cat /etc/lvm/archive/ myvg_00000-1248998876 .vg", "lvs --all --options +devices Couldn't find device with uuid ' FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk '.", "vgchange --activate n --partial myvg PARTIAL MODE. Incomplete logical volumes will be processed. WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s . 
WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/vdb1 ). 0 logical volume(s) in volume group \" myvg \" now active", "pvcreate --uuid physical-volume-uuid \\ --restorefile /etc/lvm/archive/ volume-group-name_backup-number .vg \\ block-device", "pvcreate --uuid \"FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk\" \\ --restorefile /etc/lvm/archive/VG_00050.vg \\ /dev/vdb1 Physical volume \"/dev/vdb1\" successfully created", "vgcfgrestore myvg Restored volume group myvg", "lvs --all --options +devices myvg", "LV VG Attr LSize Origin Snap% Move Log Copy% Devices mylv myvg -wi--- 300.00G /dev/vdb1 (0),/dev/vdb1(0) mylv myvg -wi--- 300.00G /dev/vdb1 (34728),/dev/vdb1(0)", "lvchange --resync myvg/mylv", "lvchange --activate y myvg/mylv", "lvs --all --options +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices mylv myvg -wi--- 300.00G /dev/vdb1 (0),/dev/vdb1(0) mylv myvg -wi--- 300.00G /dev/vdb1 (34728),/dev/vdb1(0)", "Insufficient free extents", "vgdisplay myvg", "--- Volume group --- VG Name myvg System ID Format lvm2 Metadata Areas 4 Metadata Sequence No 6 VG Access read/write [...] Free PE / Size 8780 / 34.30 GB", "lvcreate --extents 8780 --name mylv myvg", "lvcreate --extents 100%FREE --name mylv myvg", "vgs --options +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext myvg 2 1 0 wz--n- 34.30G 0 0 8780", "pvck --dump metadata <disk>", "pvck --dump metadata /dev/sdb metadata text at 172032 crc Oxc627522f # vgname test segno 59 --- <raw metadata from disk> ---", "pvck --dump metadata_all <disk>", "pvck --dump metadata_all /dev/sdb metadata at 4608 length 815 crc 29fcd7ab vg test seqno 1 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 7168 length 1450 crc 5652ea55 vg test seqno 3 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv", "pvck --dump metadata_search <disk>", "pvck --dump metadata_search /dev/sdb Searching for metadata at offset 4096 size 1044480 metadata at 4608 length 815 crc 29fcd7ab vg test seqno 1 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 7168 length 1450 crc 5652ea55 vg test seqno 3 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv", "pvck --dump metadata -v <disk>", "pvck --dump metadata -v /dev/sdb metadata text at 199680 crc 0x628cf243 # vgname my_vg seqno 40 --- my_vg { id = \"dmEbPi-gsgx-VbvS-Uaia-HczM-iu32-Rb7iOf\" seqno = 40 format = \"lvm2\" status = [\"RESIZEABLE\", \"READ\", \"WRITE\"] flags = [] extent_size = 8192 max_lv = 0 max_pv = 0 metadata_copies = 0 physical_volumes { pv0 { id = \"8gn0is-Hj8p-njgs-NM19-wuL9-mcB3-kUDiOQ\" device = \"/dev/sda\" device_id_type = \"sys_wwid\" device_id = \"naa.6001405e635dbaab125476d88030a196\" status = [\"ALLOCATABLE\"] flags = [] dev_size = 125829120 pe_start = 8192 pe_count = 15359 } pv1 { id = \"E9qChJ-5ElL-HVEp-rc7d-U5Fg-fHxL-2QLyID\" device = \"/dev/sdb\" device_id_type = \"sys_wwid\" device_id = \"naa.6001405f3f9396fddcd4012a50029a90\" status = [\"ALLOCATABLE\"] flags = [] dev_size = 125829120 pe_start = 8192 pe_count = 15359 }", "pvck --dump metadata_search --settings metadata_offset=5632 -f meta.txt /dev/sdb Searching for metadata at offset 4096 size 1044480 metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv head -2 meta.txt test { id = \"FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv\"", "pvcreate --restorefile 
<metadata-file> --uuid <UUID> <disk>", "pvck --dump headers <disk>", "vgcfgrestore --file <metadata-file> <vg-name>", "pvck --dump metadata <disk>", "vgs", "pvck --repair -f <metadata-file> <disk>", "vgs <vgname>", "pvs <pvname>", "lvs <lvname>", "lvchange --maxrecoveryrate 4K my_vg/my_lv Logical volume _my_vg/my_lv_changed.", "lvchange --syncaction repair my_vg/my_lv", "lvchange --syncaction check my_vg/my_lv", "lvchange --syncaction repair my_vg/my_lv", "lvs -o +raid_sync_action,raid_mismatch_count my_vg/my_lv LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert SyncAction Mismatches my_lv my_vg rwi-a-r--- 500.00m 100.00 idle 0", "lvs --all --options name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] [unknown](1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] [unknown](0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --repair my_vg/my_lv /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y Faulty devices in my_vg/my_lv successfully replaced.", "lvconvert --repair my_vg/my_lv replacement_pv", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address /dev/sdc1: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. 
LV Cpy%Sync Devices my_lv 43.79 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "vgreduce --removemissing my_vg", "pvscan PV /dev/sde1 VG rhel_virt-506 lvm2 [<7.00 GiB / 0 free] PV /dev/sdb1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free]", "lvs --all --options name,copy_percent,devices my_vg my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sdd not /dev/sdf", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/mapper/mpatha not /dev/mapper/mpathc Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowera not /dev/emcpowerh", "filter = [ \"a|/dev/sda2USD|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]", "filter = [ \"a|/dev/cciss/.*|\", \"a|/dev/emcpower.*|\", \"r|.*|\" ]", "filter = [ \"a|/dev/hda.*|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/configuring_and_managing_logical_volumes/index
5.10. Determining Device Mapper Entries with the dmsetup Command
5.10. Determining Device Mapper Entries with the dmsetup Command You can use the dmsetup command to find out which device mapper entries match the multipathed devices. The following command displays all the device mapper devices and their major and minor numbers. The minor numbers determine the name of the dm device. For example, a minor number of 3 corresponds to the multipathed device /dev/dm-3 .
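To confirm the mapping in the other direction, you can query a single device or inspect the /dev/mapper symlinks. The following is a brief illustration using the mpathb device from the example listing in this section; it is not part of the original procedure:

dmsetup info mpathb
ls -l /dev/mapper/mpathb

The dmsetup info output includes a "Major, minor" line (253, 3 for mpathb in this example), and the /dev/mapper/mpathb symlink points to the corresponding ../dm-3 node.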
[ "dmsetup ls mpathd (253:4) mpathep1 (253:12) mpathfp1 (253:11) mpathb (253:3) mpathgp1 (253:14) mpathhp1 (253:13) mpatha (253:2) mpathh (253:9) mpathg (253:8) VolGroup00-LogVol01 (253:1) mpathf (253:7) VolGroup00-LogVol00 (253:0) mpathe (253:6) mpathbp1 (253:10) mpathd (253:5)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/dmsetup_queries
Chapter 1. Overview of authentication and authorization
Chapter 1. Overview of authentication and authorization 1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization. authentication Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users can access the cluster. authorization Authorization determines whether the identified user has permissions to perform the requested action. bearer token A bearer token is used to authenticate to the API with the header Authorization: Bearer <token> . Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. containers Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers in a data center, public or private cloud, or your local host. Custom Resource (CR) A CR is an extension of the Kubernetes API. group A group is a set of users. A group is useful for granting permissions to multiple users at one time. HTPasswd HTPasswd updates the files that store usernames and passwords for authentication of HTTP users. Keystone Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services. Lightweight directory access protocol (LDAP) LDAP is a protocol that queries user information. manual mode In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). mint mode Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. namespace A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OAuth client An OAuth client is used to get a bearer token. OAuth server The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. OpenID Connect OpenID Connect is a protocol that authenticates users so they can use single sign-on (SSO) to access sites that use OpenID Providers. passthrough mode In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. pod A pod is the smallest logical unit in Kubernetes. A pod comprises one or more containers that run on a worker node. regular users Users that are created automatically in the cluster upon first login or via the API. request header A request header is an HTTP header that is used to provide information about the HTTP request context, so that the server can track the response of the request. 
role-based access control (RBAC) A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles. service accounts Service accounts are used by the cluster components or applications. system users Users that are created automatically when the cluster is installed. users A user is an entity that can make requests to the API. 1.2. About authentication in OpenShift Container Platform To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication through the following tasks: Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster. Configuring the internal OAuth server : The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout, and customize the internal OAuth server URL. Note Users can view and manage OAuth tokens owned by them . Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients . You can register and configure additional OAuth clients . Note When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token. Managing cloud provider credentials using the Cloud Credential Operator : Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks. Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user . 
Before creating a cluster administrator, ensure that you have configured an identity provider. Note After creating the cluster admin user, delete the existing kubeadmin user to improve cluster security. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. A user can create and use a service account in applications and also as an OAuth client . Scoping tokens : A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups.
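A minimal sketch of how these pieces fit together in practice - a service account, an RBAC role binding, and a bearer token presented to the API - is shown below. The namespace my-project, the service account name api-reader, the choice of the view role, and the API server URL are illustrative assumptions, not values defined in this guide.

oc create serviceaccount api-reader -n my-project            # create the service account
oc adm policy add-role-to-user view -z api-reader -n my-project   # grant it the view role in that namespace
TOKEN=$(oc create token api-reader -n my-project)             # request a short-lived bearer token
curl -k -H "Authorization: Bearer ${TOKEN}" https://api.example-cluster.example.com:6443/api/v1/namespaces/my-project/pods

If the token is missing, expired, or invalid, the last request returns the HTTP 401 error described above instead of the pod list.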
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/overview-of-authentication-authorization
probe::socket.receive
probe::socket.receive Name probe::socket.receive - Message received on a socket. Synopsis Values success Was the receive successful? (1 = yes, 0 = no) protocol Protocol value flags Socket flags value name Name of this probe state Socket state value size Size of message received (in bytes) or error code if success = 0 type Socket type value family Protocol family value Context The message receiver
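A short example script showing how these values might be used together follows; the 1 KiB threshold and the output format are arbitrary choices for illustration and are not part of the tapset.

#!/usr/bin/stap
# Print a line for every successfully received message larger than 1 KiB.
probe socket.receive {
  if (success && size > 1024)
    printf("%s: pid %d received %d bytes (protocol %d, family %d, type %d)\n",
           name, pid(), size, protocol, family, type)
}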
[ "socket.receive" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-receive
Chapter 5. Capability Trimming in JBoss EAP for OpenShift
Chapter 5. Capability Trimming in JBoss EAP for OpenShift When building an image that includes JBoss EAP, you can control the JBoss EAP features and subsystems to include in the image. The default JBoss EAP server included in S2I images includes the complete server and all features. You might want to trim the capabilities included in the provisioned server. For example, you might want to reduce the security exposure of the provisioned server, or you might want to reduce the memory footprint so it is more appropriate for a microservice container. 5.1. Provision a custom JBoss EAP server To provision a custom server with trimmed capabilities, pass the GALLEON_PROVISION_LAYERS environment variable during the S2I build phase. The value of the environment variable is a comma-separated list of the layers to provision to build the server. For example, if you specify the environment variable as GALLEON_PROVISION_LAYERS=jaxrs-server,sso , a JBoss EAP server is provisioned with the following capabilities: A servlet container The ability to configure a datasource The jaxrs , weld , and jpa subsystems Red Hat SSO integration 5.2. Available JBoss EAP Layers Red Hat makes available six layers to customize provisioning of the JBoss EAP server in OpenShift. Three layers are base layers that provide core functionality. Three are decorator layers that enhance the base layers. The following Jakarta EE specifications are not supported in any provisioning layer: Jakarta Server Faces 2.3 Jakarta Enterprise Beans 3.2 Jakarta XML Web Services 2.3 5.2.1. Base Layers Each base layer includes core functionality for a typical server user case. datasources-web-server This layer includes a servlet container and the ability to configure a datasource. The following are the JBoss EAP subsystems included by default in the datasources-web-server : core-management datasources deployment-scanner ee elytron io jca jmx logging naming request-controller security-manager transactions undertow The following Jakarta EE specifications are supported in this layer: Jakarta JSON Processing 1.1 Jakarta JSON Binding 1.0 Jakarta Servlet 4.0 Jakarta Expression Language 3.0 Jakarta Server Pages 2.3 Jakarta Standard Tag Library 1.2 Jakarta Concurrency 1.1 Jakarta Annotations 1.3 Jakarta XML Binding 2.3 Jakarta Debugging Support for Other Languages 1.0 Jakarta Transactions 1.3 Jakarta Connectors 1.7 jaxrs-server This layer enhances the datasources-web-server layer with the following JBoss EAP subsystems: jaxrs weld jpa This layer also adds Infinispan-based second-level entity caching locally in the container. The following Jakarta EE specifications are supported in this layer in addition to those supported in the datasources-web-server layer: Jakarta Contexts and Dependency Injection 2.0 Jakarta Bean Validation 2.0 Jakarta Interceptors 1.2 Jakarta RESTful Web Services 2.1 Jakarta Persistence 2.2 cloud-server This layer enhances the jaxrs-server layer with the following JBoss EAP subsystems: resource-adapters messaging-activemq (remote broker messaging, not embedded messaging) This layer also adds the following observability features to the jaxrs-server layer: Health subsystem Metrics subsystem The following Jakarta EE specification is supported in this layer in addition to those supported in the jaxrs-server layer: Jakarta Security 1.0 5.2.2. Decorator Layers Decorator layers are not used alone. You can configure one or more decorator layers with a base layer to deliver additional functionality. 
sso This decorator layer adds Red Hat Single Sign-On integration to the provisioned server. observability This decorator layer adds the following observability features to the provisioned server: Health subsystem Metrics subsystem Note This layer is built in to the cloud-server layer. You do not need to add this layer to the cloud-server layer. web-clustering This layer adds embedded Infinispan-based web session clustering to the provisioned server. 5.3. Provisioning User-developed Layers in JBoss EAP In addition to provisioning layers available from Red Hat, you can provision custom layers you develop. Procedure Build a custom layer using the Galleon Maven plugin. For more information, see Preparing the Maven project . Deploy the custom layer to an accessible Maven repository. You can use custom Galleon feature-pack environment variables to customize Galleon feature-packs and layers during the S2I image build process. For more information about customizing Galleon feature-packs and layers, see Using the custom Galleon feature-pack during S2I build . Optional: Create a custom provisioning file to reference the user-defined layer and supported JBoss EAP layers and store it in your application directory. For more information about creating a custom provisioning file, see Custom provisioning files for JBoss EAP . Run the S2I process to provision a JBoss EAP server in OpenShift. For more information, see Using the custom Galleon feature-pack during S2I build . 5.3.1. Building and using custom Galleon layers for JBoss EAP Custom Galleon layers are packaged inside a Galleon feature-pack that is designed to run with JBoss EAP 7.4. In Openshift, you can build and use a Galleon feature-pack that contains layers to provision, for example, a MariaDB driver and data source for the JBoss EAP 7.4 server. A layer contains the content that is installed in the server. A layer can update the server XML configuration file and add content to the server installation. This section documents how to build and use in OpenShift a Galleon feature-pack containing layers to provision a MariaDB driver and data source for the JBoss EAP 7.4 server. 5.3.1.1. Preparing the Maven project Galleon feature-packs are created using Maven. This procedure includes the steps to create a new Maven project. Procedure To create a new Maven project, run the following command: In the directory mariadb-galleon-pack , update the pom.xml file to include the Red Hat Maven repository: Update the pom.xml file to add dependencies on the EAP Galleon feature-pack and the MariaDB driver: Update the pom.xml file to include the Maven plugin that is used to build the Galleon feature-pack: 5.3.1.2. Adding the feature pack content This procedure helps you add layers to a custom Galleon feature-pack, for example, the feature-pack including the MariaDB driver and datasource layers. Prerequisites You have created a Maven project. For more details, see Preparing the Maven project . Procedure Create the directory, src/main/resources , within a custom feature-pack Maven project, for example, see Preparing the Maven project . This directory is the root directory containing the feature-pack content. Create the directory src/main/resources/modules/org/mariadb/jdbc/main . 
In the main directory, create a file named module.xml with the following content: <?xml version="1.0" encoding="UTF-8"?> <module name="org.mariadb.jdbc" xmlns="urn:jboss:module:1.8"> <resources> <artifact name="USD{org.mariadb.jdbc:mariadb-java-client}"/> 1 </resources> <dependencies> 2 <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> 1 The MariaDB driver groupId and artifactId . At provisioning time, the actual driver jar file gets installed. The version of the driver is referenced from the pom.xml file. 2 The JBoss Modules modules dependencies for the MariaDB driver. Create the directory src/main/resources/layers/standalone/ . This is the root directory of all the layers that the Galleon feature-pack is defining. Create the directory src/main/resources/layers/standalone/mariadb-driver . In the mariadb-driver directory, create the layer-spec.xml file with the following content: <?xml version="1.0" ?> <layer-spec xmlns="urn:jboss:galleon:layer-spec:1.0" name="mariadb-driver"> <feature spec="subsystem.datasources"> 1 <feature spec="subsystem.datasources.jdbc-driver"> <param name="driver-name" value="mariadb"/> <param name="jdbc-driver" value="mariadb"/> <param name="driver-xa-datasource-class-name" value="org.mariadb.jdbc.MariaDbDataSource"/> <param name="driver-module-name" value="org.mariadb.jdbc"/> </feature> </feature> <packages> 2 <package name="org.mariadb.jdbc"/> </packages> </layer-spec> 1 Update the datasources subsytem configuration with a JDBC-driver named MariaDB, implemented by the module org.mariadb.jdbc . 2 The JBoss Modules module containing the driver classes that are installed when the layer is provisioned. The mariadb-driver layer updates the datasources subsystem with the configuration of a JDBC driver, implemented by the JBoss Modules module. Create the directory src/main/resources/layers/standalone/mariadb-datasource . In the mariadb-datasource directory, create the layer-spec.xml file with the following content: <?xml version="1.0" ?> <layer-spec xmlns="urn:jboss:galleon:layer-spec:1.0" name="mariadb-datasource"> <dependencies> <layer name="mariadb-driver"/> 1 </dependencies> <feature spec="subsystem.datasources.data-source"> 2 <param name="data-source" value="MariaDBDS"/> <param name="jndi-name" value="java:jboss/datasources/USD{env.MARIADB_DATASOURCE:MariaDBDS}"/> <param name="connection-url" value="jdbc:mariadb://USD{env.MARIADB_HOST:localhost}:USD{env.MARIADB_PORT:3306}/USD{env.MARIADB_DATABASE}"/> 3 <param name="driver-name" value="mariadb"/> <param name="user-name" value="USD{env.MARIADB_USER}"/> 4 <param name="password" value="USD{env.MARIADB_PASSWORD}"/> </feature> </layer-spec> 1 This dependency enforces the provisioning of the MariaDB driver when the datasource is provisioned. All the layers a layer depends on are automatically provisioned when that layer is provisioned. 2 Update the datasources subsystem configuration with a datasource named MariaDBDS. 3 Datasource's name, host, port, and database values are resolved from the environment variables MARIADB_DATASOURCE , MARIADB_HOST , MARIADB_PORT , and MARIADB_DATABASE , which are set when the server is started. 4 User name and password values are resolved from the environment variables MARIADB_USER and MARIADB_PASSWORD . Build the Galleon feature-pack by running the following command: The file target/mariadb-galleon-pack-1.0-SNAPSHOT.zip is created. 5.3.1.3. 
Using the custom Galleon feature-pack during S2I build A custom feature-pack must be made available to the Maven build that occurs during OpenShift S2I build. This is usually achieved by deploying the custom feature-pack as an artifact, for example, org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT to an accessible Maven repository. In order to test the feature-pack before deployment, you can use the EAP S2I builder image capability that allows you to make use of a locally built Galleon feature-pack. Use the following procedure example to customize the todo-backend EAP quickstart with the use of MariaDB driver instead of PostgreSQL driver. Note For more information about the todo-backend EAP quickstart, see EAP quickstart . For more information about configuring the JBoss EAP S2I image for custom Galleon feature-pack usage, see Configure Galleon by using advanced environment variables . Prerequisites You have OpenShift command-line installed You are logged in to an OpenShift cluster You have installed the JBoss EAP OpenShift images in your cluster You have configured access to the Red Hat Container registry. For detailed information, see Red Hat Container Registry . You have created a custom Galleon feature-pack. For detailed information, see Preparing the Maven project . Procedure Start the MariaDB database by running the following command: The OpenShift service mariadb-101-rhel7 is created and started. Create a secret from the feature-pack ZIP archive, generated by the custom feature-pack Maven build, by running the following command within the Maven project directory mariadb-galleon-pack : The secret mariadb-galleon-pack is created. When initiating the S2I build, this secret is used to mount the feature-pack zip file in the pod, making the file available during the server provisioning phase. To create a new OpenShift build to build an application image containing the todo-backend quickstart deployment running inside a server trimmed with Galleon, run the following command: 1 The custom feature-pack environment variable that contains a comma separated list of feature-pack Maven coordinates, such as groupId:artifactId:version . 2 The set of Galleon layers that are used to provision the server. jaxrs-server is a base server layer and mariadb-datasource is the custom layer that brings the MariaDB driver and a new datasource to the server installation. 3 The location of the local Maven repository within the image that contains the MariaDB feature-pack. This repository is populated when mounting the secret inside the image. 4 The mariadb-galleon-pack secret is mounted in the /tmp/repo/org/example/mariadb/mariadb-galleon-pack/1.0-SNAPSHOT directory. To start a new build from the created OpenShift build, run the following command: After successful command execution, the image todos-app-build is created. To create a new deployment, provide the environment variables that are required to bind the datasource to the running MariaDB database by executing the following command: 1 The quickstart expects the datasource to be named ToDos Note For more details about the custom Galleon feature-pack environment variables, see Custom Galleon feature-pack environment variables To expose the todos-app application, run the following command: To create a new task, run the following command: To access the list of tasks, run the following command: The added task is displayed in a browser. 5.3.1.4. 
Custom Provisioning Files for JBoss EAP Custom provisioning files are XML files with the file name provisioning.xml that are stored in the galleon subdirectory. Using the provisioning.xml file is an alternative to the usage of GALLEON_PROVISION_FEATURE_PACKS and GALLEON_PROVISION_LAYERS environment variables. During S2I build, the provisioning.xml file is used to provision the custom EAP server. Important Do not create a custom provisioning file when using the GALLEON_PROVISION_LAYERS environment variable, because this environment variable configures the S2I build process to ignore the file. The following code illustrates a custom provisioning file. <?xml version="1.0" ?> <installation xmlns="urn:jboss:galleon:provisioning:3.0"> <feature-pack location="eap-s2i@maven(org.jboss.universe:s2i-universe)"> 1 <default-configs inherit="false"/> 2 <packages inherit="false"/> 3 </feature-pack> <feature-pack location="org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT"> 4 <default-configs inherit="false"/> <packages inherit="false"/> </feature-pack> <config model="standalone" name="standalone.xml"> 5 <layers> <include name="jaxrs-server"/> <include name="mariadb-datasource"/> </layers> </config> <options> 6 <option name="optional-packages" value="passive+"/> </options> </installation> 1 This element instructs the provisioning process to provision the current eap-s2i feature-pack. Note that a builder image includes only one feature pack. 2 This element instructs the provisioning process to exclude default configurations. 3 This element instructs the provisioning process to exclude default packages. 4 This element instructs the provisioning process to provision the org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT feature pack. The child elements instruct the process to exclude default configurations and default packages. 5 This element instructs the provisioning process to create a custom standalone configuration. The configuration includes the jaxrs-server base layer and the mariadb-datasource custom layer from the org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT feature pack. 6 This element instructs the provisioning process to optimize provisioning of JBoss EAP modules. Additional resources For more information about using the GALLEON_PROVISION_LAYERS environment variable, see Provision a Custom JBoss EAP server . 5.3.2. Configure Galleon by using advanced environment variables You can use advanced custom Galleon feature pack environment variables to customize the location where you store your custom Galleon feature packs and layers during the S2I image build process. These advanced custom Galleon feature pack environment variables are as follows: GALLEON_DIR=<path> , which overrides the default <project_root_dir>/galleon directory path to <project_root_dir>/<GALLEON_DIR> . GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=<path> , which overrides the <project root dir>/galleon/repository directory path with an absolute path to a Maven local repository cache directory. This repository contains custom Galleon feature packs. You must locate the Galleon feature pack archive files inside a sub-directory that is compliant with the Maven local-cache file system configuration. For example, locate the org.examples:my-feature-pack:1.0.0.Final feature pack inside the path-to-repository/org/examples/my-feature-pack/1.0.0.Final/my-feature-pack-1.0.0.Final.zip path. You can configure your Maven project settings by creating a settings.xml file in the <project_root>/<GALLEON_DIR> directory. 
The default value for GALLEON_DIR is <project_root_dir>/galleon . Maven uses the file to provision your custom Galleon feature packs for your application. If you do not create a settings.xml file, Maven uses a default settings.xml file that was created by the S2I image. Important Do not specify a local Maven repository location in a settings.xml file, because the S2I builder image specifies a location to your local Maven repository. The S2I builder image uses this location during the S2I build process. Additional resources For more information about custom Galleon feature pack environment variables, see custom Galleon feature pack environment variables . 5.3.3. Custom Galleon feature pack environment variables You can use any of the following custom Galleon feature pack environment variables to customize how you use your JBoss EAP S2I image. Table 5.1. Descriptions of custom Galleon feature pack environment variables Environment variable Description GALLEON_DIR=<path> Where <path> is a directory relative to the root directory of your application project. Your <path> directory contains your optional Galleon custom content, such as the settings.xml file and local Maven repository cache. This cache contains the custom Galleon feature packs. Directory defaults to galleon . GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=<path> <path> is the absolute path to a Maven local repository directory that contains custom feature packs. Directory defaults to galleon/repository . GALLEON_PROVISION_FEATURE_PACKS=<list_of_galleon_feature_packs> Where <list_of_galleon_feature_packs> is a comma-separated list of your custom Galleon feature packs identified by Maven coordinates. The listed feature packs must be compatible with the version of the JBoss EAP 7.4 server present in the builder image. You can use the GALLEON_PROVISION_LAYERS environment variable to set the Galleon layers, which were defined by your custom feature packs, for your server.
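As a hedged illustration of how these variables combine, the following sketch starts an S2I build for a hypothetical application repository whose Galleon content lives under a non-default directory. The Git URL, feature-pack coordinates, layer name, and directory name are placeholders rather than values defined by this guide, and whether the repository subdirectory is picked up automatically follows the defaults described above.

# Assumed project layout (illustrative):
#   my-app/
#     provisioning/            <- GALLEON_DIR=provisioning
#       settings.xml           <- optional Maven settings (do not set a local repository location here)
#       repository/            <- local Maven cache containing org/example/my-feature-pack/1.0.0.Final/...
oc new-build jboss-eap74-openjdk11-openshift:latest~https://github.com/example/my-app.git \
  --env=GALLEON_DIR=provisioning \
  --env=GALLEON_PROVISION_FEATURE_PACKS="org.example:my-feature-pack:1.0.0.Final" \
  --env=GALLEON_PROVISION_LAYERS="jaxrs-server,my-custom-layer" \
  --name=my-app-build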
[ "mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=pom-root -DgroupId=org.example.mariadb -DartifactId=mariadb-galleon-pack -DinteractiveMode=false", "<repositories> <repository> <id>redhat-ga</id> <name>Redhat GA</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>", "<dependencies> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-ee-galleon-pack</artifactId> <version>7.4.4.GA-redhat-00011</version> <type>zip</type> </dependency> <dependency> <groupId>org.mariadb.jdbc</groupId> <artifactId>mariadb-java-client</artifactId> <version>3.0.5</version> </dependency> </dependencies>", "<build> <plugins> <plugin> <groupId>org.wildfly.galleon-plugins</groupId> <artifactId>wildfly-galleon-maven-plugin</artifactId> <version>5.2.11.Final</version> <executions> <execution> <id>mariadb-galleon-pack-build</id> <goals> <goal>build-user-feature-pack</goal> </goals> <phase>compile</phase> </execution> </executions> </plugin> </plugins> </build>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module name=\"org.mariadb.jdbc\" xmlns=\"urn:jboss:module:1.8\"> <resources> <artifact name=\"USD{org.mariadb.jdbc:mariadb-java-client}\"/> 1 </resources> <dependencies> 2 <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "<?xml version=\"1.0\" ?> <layer-spec xmlns=\"urn:jboss:galleon:layer-spec:1.0\" name=\"mariadb-driver\"> <feature spec=\"subsystem.datasources\"> 1 <feature spec=\"subsystem.datasources.jdbc-driver\"> <param name=\"driver-name\" value=\"mariadb\"/> <param name=\"jdbc-driver\" value=\"mariadb\"/> <param name=\"driver-xa-datasource-class-name\" value=\"org.mariadb.jdbc.MariaDbDataSource\"/> <param name=\"driver-module-name\" value=\"org.mariadb.jdbc\"/> </feature> </feature> <packages> 2 <package name=\"org.mariadb.jdbc\"/> </packages> </layer-spec>", "<?xml version=\"1.0\" ?> <layer-spec xmlns=\"urn:jboss:galleon:layer-spec:1.0\" name=\"mariadb-datasource\"> <dependencies> <layer name=\"mariadb-driver\"/> 1 </dependencies> <feature spec=\"subsystem.datasources.data-source\"> 2 <param name=\"data-source\" value=\"MariaDBDS\"/> <param name=\"jndi-name\" value=\"java:jboss/datasources/USD{env.MARIADB_DATASOURCE:MariaDBDS}\"/> <param name=\"connection-url\" value=\"jdbc:mariadb://USD{env.MARIADB_HOST:localhost}:USD{env.MARIADB_PORT:3306}/USD{env.MARIADB_DATABASE}\"/> 3 <param name=\"driver-name\" value=\"mariadb\"/> <param name=\"user-name\" value=\"USD{env.MARIADB_USER}\"/> 4 <param name=\"password\" value=\"USD{env.MARIADB_PASSWORD}\"/> </feature> </layer-spec>", "mvn clean install", "new-app -e MYSQL_USER=admin -e MYSQL_PASSWORD=admin -e MYSQL_DATABASE=mariadb registry.redhat.io/rhscl/mariadb-101-rhel7", "create secret generic mariadb-galleon-pack --from-file=target/mariadb-galleon-pack-1.0-SNAPSHOT.zip", "new-build jboss-eap74-openjdk11-openshift:latest~https://github.com/jboss-developer/jboss-eap-quickstarts#EAP_7.4.0.GA --context-dir=todo-backend --env=GALLEON_PROVISION_FEATURE_PACKS=\"org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT\" \\ 1 --env=GALLEON_PROVISION_LAYERS=\"jaxrs-server,mariadb-datasource\" \\ 2 --env=GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=\"/tmp/repo\" \\ 3 --env=MAVEN_ARGS_APPEND=\"-Dcom.redhat.xpaas.repo.jbossorg\" --build-secret=mariadb-galleon-pack:/tmp/repo/org/example/mariadb/mariadb-galleon-pack/1.0-SNAPSHOT \\ 4 --name=todos-app-build", "start-build todos-app-build", "new-app --name=todos-app todos-app-build 
--env=MARIADB_PORT=3306 --env=MARIADB_USER=admin --env=MARIADB_PASSWORD=admin --env=MARIADB_HOST=mariadb-101-rhel7 --env=MARIADB_DATABASE=mariadb --env=MARIADB_DATASOURCE=ToDos 1", "expose svc/todos-app", "curl -X POST http://USD(oc get route todos-app --template='{{ .spec.host }}') -H 'Content-Type: application/json' -d '{\"title\":\"todo1\"}'", "curl http://USD(oc get route todos-app --template='{{ .spec.host }}')", "<?xml version=\"1.0\" ?> <installation xmlns=\"urn:jboss:galleon:provisioning:3.0\"> <feature-pack location=\"eap-s2i@maven(org.jboss.universe:s2i-universe)\"> 1 <default-configs inherit=\"false\"/> 2 <packages inherit=\"false\"/> 3 </feature-pack> <feature-pack location=\"org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT\"> 4 <default-configs inherit=\"false\"/> <packages inherit=\"false\"/> </feature-pack> <config model=\"standalone\" name=\"standalone.xml\"> 5 <layers> <include name=\"jaxrs-server\"/> <include name=\"mariadb-datasource\"/> </layers> </config> <options> 6 <option name=\"optional-packages\" value=\"passive+\"/> </options> </installation>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/capability-trimming-eap-foropenshift_default
Config APIs
Config APIs OpenShift Container Platform 4.17 Reference guide for config APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/config_apis/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_selinux/proc_providing-feedback-on-red-hat-documentation_using-selinux
Chapter 5. View OpenShift Data Foundation Topology
Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
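The procedure above is entirely console-based. If you also want to list the same storage nodes and the deployments behind each node's preview from the command line, a rough equivalent is shown below; it assumes the default openshift-storage namespace and the standard OpenShift Data Foundation node label.

oc get nodes -l cluster.ocs.openshift.io/openshift-storage    # nodes shown in the topology zones
oc get deployments,pods -n openshift-storage -o wide          # deployments and pods behind each node's preview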
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/viewing-odf-topology_rhodf
Chapter 51. Infinispan Embedded
Chapter 51. Infinispan Embedded Since Camel 2.13 Both producer and consumer are supported This component allows you to interact with Infinispan distributed data grid / cache. Infinispan is an extremely scalable, highly available key / value data store and data grid platform written in Java. The camel-infinispan-embedded component includes the following features. Local Camel Consumer - Receives cache change notifications and sends them to be processed. This can be done synchronously or asynchronously, and is also supported with a replicated or distributed cache. Local Camel Producer - A producer creates and sends messages to an endpoint. The camel-infinispan producer uses GET , PUT , REMOVE , and CLEAR operations. The local producer is also supported with a replicated or distributed cache. The events are processed asynchronously. 51.1. Dependencies When using infinispan-embedded with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-embedded-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 51.2. URI format The producer allows sending messages to a local infinispan cache. The consumer allows listening for events from local infinispan cache. If no cache configuration is provided, embedded cacheContainer is created directly in the component. 51.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 51.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 51.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 51.4. Component Options The Infinispan Embedded component supports 20 options that are listed below. Name Description Default Type configuration (common) Component configuration. InfinispanEmbeddedConfiguration queryBuilder (common) Specifies the query builder. 
InfinispanQueryBuilder bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean clusteredListener (consumer) If true, the listener will be installed for the entire cluster. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanEmbeddedCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String sync (consumer) If true, the consumer will receive notifications synchronously. true boolean defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC PUT InfinispanOperation value* (producer) Set a specific value for producer operations. Object autowiredEnabled (advanced) Whether auto-wiring is enabled. This is used for automatic auto-wiring options (the option must be marked as auto-wired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. EmbeddedCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. 
BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 51.5. Endpoint Options The Infinispan Embedded endpoint is configured using URI syntax. Following are the path and query parameters. 51.5.1. Path Parameters (1 parameters) Name Description Default Type cacheName (common) Required The name of the cache to use. Use current to use the existing cache name from the currently configured cached manager. Or use default for the default cache manager name. String 51.5.2. Query Parameters (20 parameters) Name Description Default Type queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder clusteredListener (consumer) If true, the listener will be installed for the entire cluster. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanEmbeddedCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String sync (consumer) If true, the consumer will receive notifications synchronously. true boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: * InOnly * InOut * InOptionalOut ExchangePattern defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. EmbeddedCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 51.6. Message Headers The Infinispan Embedded component supports 22 message headers that are listed below. Name Description Default Type CamelInfinispanEventType (consumer) Constant: EVENT_TYPE The type of the received event. String CamelInfinispanIsPre (consumer) Constant: IS_PRE true if the notification is before the event has occurred, false if after the event has occurred. boolean CamelInfinispanCacheName (common) Constant: CACHE_NAME The cache participating in the operation or event. String CamelInfinispanKey (common) Constant: KEY The key to perform the operation to or the key generating the event. Object CamelInfinispanValue (producer) Constant: VALUE The value to use for the operation. Object CamelInfinispanDefaultValue (producer) Constant: DEFAULT_VALUE The default value to use for a getOrDefault. Object CamelInfinispanOldValue (producer) Constant: OLD_VALUE The old value to use for a replace. Object CamelInfinispanMap (producer) Constant: MAP A Map to use in case of CamelInfinispanOperationPutAll operation. Map CamelInfinispanOperation (producer) Constant: OPERATION The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC InfinispanOperation CamelInfinispanOperationResult (producer) Constant: RESULT The name of the header whose value is the result. String CamelInfinispanOperationResultHeader (producer) Constant: RESULT_HEADER Store the operation result in a header instead of the message body. String CamelInfinispanLifespanTime (producer) Constant: LIFESPAN_TIME The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. long CamelInfinispanTimeUnit (producer) Constant: LIFESPAN_TIME_UNIT The Time Unit of an entry Lifespan Time. 
Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS TimeUnit CamelInfinispanMaxIdleTime (producer) Constant: MAX_IDLE_TIME The maximum amount of time an entry is allowed to be idle for before it is considered as expired. long CamelInfinispanMaxIdleTimeUnit (producer) Constant: MAX_IDLE_TIME_UNIT The Time Unit of an entry Max Idle Time. Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS TimeUnit CamelInfinispanIgnoreReturnValues (consumer) Constant: IGNORE_RETURN_VALUES Signals that write operation's return value are ignored, so reading the existing value from a store or from a remote node is not necessary. false boolean CamelInfinispanEventData (consumer) Constant: EVENT_DATA The event data. Object CamelInfinispanQueryBuilder (producer) Constant: QUERY_BUILDER The QueryBuilder to use for QUERY command, if not present the command defaults to InifinispanConfiguration's one. InfinispanQueryBuilder CamelInfinispanCommandRetried (consumer) Constant: COMMAND_RETRIED This will be true if the write command that caused this had to be retried again due to a topology change. boolean CamelInfinispanEntryCreated (consumer) Constant: ENTRY_CREATED Indicates whether the cache entry modification event is the result of the cache entry being created. boolean CamelInfinispanOriginLocal (consumer) Constant: ORIGIN_LOCAL true if the call originated on the local cache instance; false if originated from a remote one. boolean CamelInfinispanCurrentState (consumer) Constant: CURRENT_STATE True if this event is generated from an existing entry as the listener has Listener. boolean 51.7. Camel Operations This section lists all available operations along with their header information. Table 51.1. Table 1. Put Operations Operation Name Description InfinispanOperation.PUT Puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTASYNC Asynchronously puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTIFABSENT Puts a key/value pair in the cache if it did not exist, optionally with expiration InfinispanOperation.PUTIFABSENTASYNC Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 51.2. Table 2. Put All Operations Operation Name Description InfinispanOperation.PUTALL Adds multiple entries to a cache, optionally with expiration CamelInfinispanOperation.PUTALLASYNC Asynchronously adds multiple entries to a cache, optionally with expiration Required Headers : CamelInfinispanMap Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Table 51.3. Table 3. Get Operations Operation Name Description InfinispanOperation.GET Retrieves the value associated with a specific key from the cache InfinispanOperation.GETORDEFAULT Retrieves the value, or default value, associated with a specific key from the cache Required Headers : CamelInfinispanKey Table 51.4. Table 4. Contains Key Operation Operation Name Description InfinispanOperation.CONTAINSKEY Determines whether a cache contains a specific key Required Headers CamelInfinispanKey Result Header CamelInfinispanOperationResult Table 51.5. Table 5. 
Contains Value Operation Operation Name Description InfinispanOperation.CONTAINSVALUE Determines whether a cache contains a specific value Required Headers : CamelInfinispanKey Table 51.6. Table 6. Remove Operations Operation Name Description InfinispanOperation.REMOVE Removes an entry from a cache, optionally only if the value matches a given one InfinispanOperation.REMOVEASYNC Asynchronously removes an entry from a cache, optionally only if the value matches a given one Required Headers : CamelInfinispanKey Optional Headers : CamelInfinispanValue Result Header : CamelInfinispanOperationResult Table 51.7. Table 7. Replace Operations Operation Name Description InfinispanOperation.REPLACE Conditionally replaces an entry in the cache, optionally with expiration InfinispanOperation.REPLACEASYNC Asynchronously conditionally replaces an entry in the cache, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue CamelInfinispanOldValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 51.8. Table 8. Clear Operations Operation Name Description InfinispanOperation.CLEAR Clears the cache InfinispanOperation.CLEARASYNC Asynchronously clears the cache Table 51.9. Table 9. Size Operation Operation Name Description InfinispanOperation.SIZE Returns the number of entries in the cache Result Header CamelInfinispanOperationResult Table 51.10. Table 10. Stats Operation Operation Name Description InfinispanOperation.STATS Returns statistics about the cache Result Header : CamelInfinispanOperationResult Table 51.11. Table 11. Query Operation Operation Name Description InfinispanOperation.QUERY Executes a query on the cache Required Headers : CamelInfinispanQueryBuilder Result Header : CamelInfinispanOperationResult Note Write methods like put(key, value) and remove(key) do not return the value by default. 51.8. Examples Put a key/value into a named cache: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant("123") (2) .to("infinispan:myCacheName&cacheContainer=#cacheContainer"); (3) Set the operation to perform Set the key used to identify the element in the cache Use the configured cache manager cacheContainer from the registry to put an element to the cache named myCacheName It is possible to configure the lifetime and/or the idle time before the entry expires and gets evicted from the cache, as example. 
from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to("infinispan:myCacheName"); Set the lifespan of the entry Set the time unit for the lifespan Queries from("direct:start") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having("name").like("%abc%").build(); } }) .to("infinispan:myCacheName?cacheContainer=#cacheManager") ; Custom Listeners from("infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener") .to("mock:result"); The instance of myCustomListener must exist and Camel should be able to look it up from the Registry . Users are encouraged to extend the org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedCustomListener class and annotate the resulting class with @Listener which can be found in package org.infinispan.notifications . 51.9. Using the Infinispan based idempotent repository Java Example InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri("classpath:infinispan.xml") InfinispanEmbeddedIdempotentRepository repo = new InfinispanEmbeddedIdempotentRepository("idempotent"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .idempotentConsumer(header("MessageID"), repo) (3) .to("mock:result"); } }); Configure the cache Configure the repository bean Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedIdempotentRepository" destroy-method="stop"> <constructor-arg value="idempotent"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration"> <property name="configurationUrl" value="classpath:infinispan.xml"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <idempotentConsumer idempotentRepository="infinispanRepo"> (3) <header>MessageID</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> Set the name of the cache that will be used by the repository Configure the repository bean Set the repository to the route 51.10. 
Using the Infinispan based aggregation repository Java Example InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri("classpath:infinispan.xml") InfinispanEmbeddedAggregationRepository repo = new InfinispanEmbeddedAggregationRepository("aggregation"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .aggregate(header("MessageID")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategy("myStrategy") .to("mock:result"); } }); Configure the cache Create the repository bean Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedAggregationRepository" destroy-method="stop"> <constructor-arg value="aggregation"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration"> <property name="configurationUrl" value="classpath:infinispan.xml"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <aggregate aggregationStrategy="myStrategy" completionSize="3" aggregationRepository="infinispanRepo"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri="mock:result"/> </aggregate> </route> </camelContext> Set the name of the cache that will be used by the repository Configure the repository bean Set the repository to the route Note With the release of Infinispan 11, it is required to set the encoding configuration on any cache created. This is critical for consuming events too. For more information have a look at Data Encoding and MediaTypes in the official Infinispan documentation. 51.11. Spring Boot Auto-Configuration The component supports 17 options that are listed below. Name Description Default Type camel.component.infinispan-embedded.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.infinispan-embedded.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.infinispan-embedded.cache-container Specifies the cache Container to connect. The option is a org.infinispan.manager.EmbeddedCacheManager type. EmbeddedCacheManager camel.component.infinispan-embedded.cache-container-configuration The CacheContainer configuration. Used if the cacheContainer is not defined. The option is a org.infinispan.configuration.cache.Configuration type. Configuration camel.component.infinispan-embedded.clustered-listener If true, the listener will be installed for the entire cluster. false Boolean camel.component.infinispan-embedded.configuration Component configuration. 
The option is a org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration type. InfinispanEmbeddedConfiguration camel.component.infinispan-embedded.configuration-uri An implementation specific URI for the CacheManager. String camel.component.infinispan-embedded.custom-listener Returns the custom listener in use, if provided. The option is a org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedCustomListener type. InfinispanEmbeddedCustomListener camel.component.infinispan-embedded.enabled Whether to enable auto configuration of the infinispan-embedded component. This is enabled by default. Boolean camel.component.infinispan-embedded.event-types Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String camel.component.infinispan-embedded.flags A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String camel.component.infinispan-embedded.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.infinispan-embedded.operation The operation to perform. InfinispanOperation camel.component.infinispan-embedded.query-builder Specifies the query builder. The option is a org.apache.camel.component.infinispan.InfinispanQueryBuilder type. InfinispanQueryBuilder camel.component.infinispan-embedded.remapping-function Set a specific remappingFunction to use in a compute operation. The option is a java.util.function.BiFunction type. BiFunction camel.component.infinispan-embedded.result-header Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String camel.component.infinispan-embedded.sync If true, the consumer will receive notifications synchronously. true Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-embedded-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "infinispan-embedded://cacheName?[options]", "infinispan-embedded:cacheName", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant(\"123\") (2) .to(\"infinispan:myCacheName&cacheContainer=#cacheContainer\"); (3)", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to(\"infinispan:myCacheName\");", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having(\"name\").like(\"%abc%\").build(); } }) .to(\"infinispan:myCacheName?cacheContainer=#cacheManager\") ;", "from(\"infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener\") .to(\"mock:result\");", "InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri(\"classpath:infinispan.xml\") InfinispanEmbeddedIdempotentRepository repo = new InfinispanEmbeddedIdempotentRepository(\"idempotent\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .idempotentConsumer(header(\"MessageID\"), repo) (3) .to(\"mock:result\"); } });", "<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedIdempotentRepository\" destroy-method=\"stop\"> <constructor-arg value=\"idempotent\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration\"> <property name=\"configurationUrl\" value=\"classpath:infinispan.xml\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <idempotentConsumer idempotentRepository=\"infinispanRepo\"> (3) <header>MessageID</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>", "InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri(\"classpath:infinispan.xml\") InfinispanEmbeddedAggregationRepository repo = new InfinispanEmbeddedAggregationRepository(\"aggregation\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .aggregate(header(\"MessageID\")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategy(\"myStrategy\") .to(\"mock:result\"); } });", "<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedAggregationRepository\" destroy-method=\"stop\"> <constructor-arg value=\"aggregation\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration\"> <property name=\"configurationUrl\" value=\"classpath:infinispan.xml\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from 
uri=\"direct:start\" /> <aggregate aggregationStrategy=\"myStrategy\" completionSize=\"3\" aggregationRepository=\"infinispanRepo\"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-infinispan-embedded-component
Console APIs
Console APIs OpenShift Container Platform 4.13 Reference guide for console APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/index
Migrating Red Hat Update Infrastructure
Migrating Red Hat Update Infrastructure Red Hat Update Infrastructure 4 Migrating to Red Hat Update Infrastructure 4 and upgrading to the latest version of Red Hat Update Infrastructure Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/migrating_red_hat_update_infrastructure/index
Chapter 3. Project storage and build options with Red Hat Decision Manager
Chapter 3. Project storage and build options with Red Hat Decision Manager As you develop a Red Hat Decision Manager project, you need to be able to track the versions of your project with a version-controlled repository, manage your project assets in a stable environment, and build your project for testing and deployment. You can use Business Central for all of these tasks, or use a combination of Business Central and external tools and repositories. Red Hat Decision Manager supports Git repositories for project version control, Apache Maven for project management, and a variety of Maven-based, Java-based, or custom-tool-based build options. The following options are the main methods for Red Hat Decision Manager project versioning, storage, and building: Table 3.1. Project version control options (Git) Versioning option Description Documentation Business Central Git VFS Business Central contains a built-in Git Virtual File System (VFS) that stores all processes, rules, and other artifacts that you create in the authoring environment. Git is a distributed version control system that implements revisions as commit objects. When you commit your changes into a repository, a new commit object in the Git repository is created. When you create a project in Business Central, the project is added to the Git repository connected to Business Central. NA External Git repository If you have Red Hat Decision Manager projects in Git repositories outside of Business Central, you can import them into Red Hat Decision Manager spaces and use Git hooks to synchronize the internal and external Git repositories. Managing projects in Business Central Table 3.2. Project management options (Maven) Management option Description Documentation Business Central Maven repository Business Central contains a built-in Maven repository that organizes and builds project assets that you create in the authoring environment. Maven is a distributed build-automation tool that uses repositories to store Java libraries, plug-ins, and other build artifacts. When building projects and archetypes, Maven dynamically retrieves Java libraries and Maven plug-ins from local or remote repositories to promote shared dependencies across projects. Note For a production environment, consider using an external Maven repository configured with Business Central. NA External Maven repository If you have Red Hat Decision Manager projects in an external Maven repository, such as Nexus or Artifactory, you can create a settings.xml file with connection details and add that file path to the kie.maven.settings.custom property in your project standalone-full.xml file. Maven Settings Reference Packaging and deploying an Red Hat Decision Manager project Table 3.3. Project build options Build option Description Documentation Business Central (KJAR) Business Central builds Red Hat Decision Manager projects stored in either the built-in Maven repository or a configured external Maven repository. Projects in Business Central are packaged automatically as knowledge JAR (KJAR) files with all components needed for deployment when you build the projects. Packaging and deploying an Red Hat Decision Manager project Standalone Maven project (KJAR) If you have a standalone Red Hat Decision Manager Maven project outside of Business Central, you can edit the project pom.xml file to package your project as a KJAR file, and then add a kmodule.xml file with the KIE base and KIE session configurations needed to build the project. 
Packaging and deploying an Red Hat Decision Manager project Embedded Java application (KJAR) If you have an embedded Java application from which you want to build your Red Hat Decision Manager project, you can use a KieModuleModel instance to programmatically create a kmodule.xml file with the KIE base and KIE session configurations, and then add all resources in your project to the KIE virtual file system KieFileSystem to build the project. Packaging and deploying an Red Hat Decision Manager project CI/CD tool (KJAR) If you use a tool for continuous integration and continuous delivery (CI/CD), you can configure the tool set to integrate with your Red Hat Decision Manager Git repositories to build a specified project. Ensure that your projects are packaged and built as KJAR files to ensure optimal deployment. NA S2I in OpenShift (container image) If you use Red Hat Decision Manager on Red Hat OpenShift Container Platform, you can build your Red Hat Decision Manager projects as KJAR files in the typical way or use Source-to-Image (S2I) to build your projects as container images. S2I is a framework and a tool that allows you to write images that use the application source code as an input and produce a new image that runs the assembled application as an output. The main advantage of using the S2I tool for building reproducible container images is the ease of use for developers. The Red Hat Decision Manager images build the KJAR files as S2I automatically, using the source from a Git repository that you can specify. You do not need to create scripts or manage an S2I build. For the S2I concept: Images in the Red Hat OpenShift Container Platform product documentation. For the operator-based deployment process: Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators . In the KIE Server settings, add a KIE Server instance and then click Set Immutable server configuration to configure the source Git repository for an S2I deployment.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/project-storage-version-build-options-ref_decision-management-architecture
Chapter 1. Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator
Chapter 1. Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator You can install Red Hat Developer Hub on OpenShift Container Platform by using the Red Hat Developer Hub Operator in the OpenShift Container Platform console. 1.1. Installing the Red Hat Developer Hub Operator As an administrator, you can install the Red Hat Developer Hub Operator. Authorized users can use the Operator to install Red Hat Developer Hub on the following platforms: Red Hat OpenShift Container Platform (OpenShift Container Platform) Amazon Elastic Kubernetes Service (EKS) Microsoft Azure Kubernetes Service (AKS) For more information on OpenShift Container Platform supported versions, see the Red Hat Developer Hub Life Cycle . Containers are available for the following CPU architectures: AMD64 and Intel 64 ( x86_64 ) Prerequisites You are logged in as an administrator on the OpenShift Container Platform web console. You have configured the appropriate roles and permissions within your project to create or access an application. For more information, see the Red Hat OpenShift Container Platform documentation on Building applications . Important For enhanced security, better control over the Operator lifecycle, and preventing potential privilege escalation, install the Red Hat Developer Hub Operator in a dedicated default rhdh-operator namespace. You can restrict other users' access to the Operator resources through role bindings or cluster role bindings. You can also install the Operator in another namespace by creating the necessary resources, such as an Operator group. For more information, see Installing global Operators in custom namespaces . However, if the Red Hat Developer Hub Operator shares a namespace with other Operators, then it shares the same update policy as well, preventing the customization of the update policy. For example, if one Operator is set to manual updates, the Red Hat Developer Hub Operator update policy is also set to manual. For more information, see Colocation of Operators in a namespace . Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Operators > OperatorHub . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub Operator card. On the Red Hat Developer Hub Operator page, click Install . On the Install Operator page, use the Update channel drop-down menu to select the update channel that you want to use: The fast channel provides y-stream (x.y) and z-stream (x.y.z) updates, for example, updating from version 1.1 to 1.2, or from 1.1.0 to 1.1.1. Important The fast channel includes all of the updates available for a particular version. Any update might introduce unexpected changes in your Red Hat Developer Hub deployment. Check the release notes for details about any potentially breaking changes. The fast-1.1 channel only provides z-stream updates, for example, updating from version 1.1.1 to 1.1.2. If you want to update the Red Hat Developer Hub y-version in the future, for example, updating from 1.1 to 1.2, you must switch to the fast channel manually. On the Install Operator page, choose the Update approval strategy for the Operator: If you choose the Automatic option, the Operator is updated without requiring manual confirmation. If you choose the Manual option, a notification opens when a new update is released in the update channel. The update must be manually approved by an administrator before installation can begin. Click Install . 
Verification To view the installed Red Hat Developer Hub Operator, click View Operator . Additional resources Deploying Red Hat Developer Hub on OpenShift Container Platform with the Operator Installing from OperatorHub using the web console 1.2. Deploying Red Hat Developer Hub on OpenShift Container Platform with the Operator As a developer, you can deploy a Red Hat Developer Hub instance on OpenShift Container Platform by using the Developer Catalog in the Red Hat OpenShift Container Platform web console. This deployment method uses the Red Hat Developer Hub Operator. Prerequisites A cluster administrator has installed the Red Hat Developer Hub Operator. For more information, see Section 1.1, "Installing the Red Hat Developer Hub Operator" . You have added a custom configuration file to OpenShift Container Platform. For more information, see Adding a custom configuration file to OpenShift Container Platform . Procedure Create a project in OpenShift Container Platform for your Red Hat Developer Hub instance, or select an existing project. Tip For more information about creating a project in OpenShift Container Platform, see Creating a project by using the web console in the Red Hat OpenShift Container Platform documentation. From the Developer perspective on the OpenShift Container Platform web console, click +Add . From the Developer Catalog panel, click Operator Backed . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub card. Click Create . Add custom configurations for the Red Hat Developer Hub instance. On the Create Backstage page, click Create Verification After the pods are ready, you can access the Red Hat Developer Hub platform by opening the URL. Confirm that the pods are ready by clicking the pod in the Topology view and confirming the Status in the Details panel. The pod status is Active when the pod is ready. From the Topology view, click the Open URL icon on the Developer Hub pod. Additional resources OpenShift Container Platform - Building applications overview
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_openshift_container_platform/assembly-install-rhdh-ocp-operator

πŸ–₯️ Red Hat Technical Documentation Dataset

πŸ“Œ Overview

This dataset contains 55,741 structured technical documentation entries sourced from Red Hat, covering:
βœ… System Administration Guides – User management, permissions, kernel tuning
βœ… Networking & Security – Firewall rules, SELinux, VPN setup
βœ… Virtualization & Containers – KVM, Podman, OpenShift, Kubernetes
βœ… Enterprise Software Documentation – RHEL, Ansible, Satellite, OpenStack

πŸ“Š Dataset Details

This dataset is designed for training Large Language Models (LLMs) for enterprise IT automation and troubleshooting.
It can be used to build Linux-focused AI assistants, automated system administration tools, and knowledge-retrieval bots.

πŸ› οΈ Potential Use Cases

  • 🏒 Enterprise Linux AI Assistant: Train an LLM for automated troubleshooting & IT helpdesk.
  • πŸ—οΈ DevOps & Automation: Enhance Ansible playbook generation & infrastructure automation.
  • πŸ” Question Answering & Chatbots: Fine-tune a model for IT Q&A systems using RAG.
  • πŸ“š Enterprise Documentation Search: Build a retrieval-augmented system for sysadmins; a minimal retrieval sketch follows below.
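
The two retrieval-oriented use cases above (RAG-based Q&A and documentation search) can be prototyped directly against this dataset. The sketch below is illustrative only: the repository ID your-org/redhat-technical-docs is a placeholder for wherever the dataset is hosted, and scikit-learn TF-IDF ranking stands in for whatever embedding model or vector store a production system would use.

# Minimal retrieval sketch over the `content` field.
# Assumes `datasets` and `scikit-learn` are installed; the dataset ID is a placeholder.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = load_dataset("your-org/redhat-technical-docs", split="train")  # placeholder ID

vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
doc_matrix = vectorizer.fit_transform(docs["content"])

def search(query: str, k: int = 3):
    """Return the top-k entries whose content best matches the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [
        {
            "title": docs[int(i)]["title"],
            "url": docs[int(i)]["url"],
            "commands": docs[int(i)]["commands"] or [],  # commands can be null
            "score": float(scores[i]),
        }
        for i in top
    ]

for hit in search("configure SELinux in permissive mode"):
    print(f"{hit['score']:.3f}  {hit['title']}  ->  {hit['url']}")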

🏷️ Dataset Schema

{
    "title": "str",      # Title of the documentation section  
    "content": "str",    # Full-text content of the technical guide  
    "commands": "List[str]",  # Extracted shell commands & configurations  
    "url": "str"         # Original Red Hat documentation source  
}
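
For reference, the schema above maps onto code roughly as follows. This is a sketch rather than an official loading recipe: the repository ID is the same placeholder used in the search example, and the null check reflects the preview rows where commands is empty.

# Load the dataset and inspect the four fields described above (placeholder ID).
from datasets import load_dataset

ds = load_dataset("your-org/redhat-technical-docs", split="train")
print(ds)  # expected features: title, content, commands, url

for row in ds.select(range(5)):
    commands = row["commands"] or []  # some rows store null instead of a list
    print(f"{row['title'][:60]:<60} {len(commands):>3} commands  {row['url']}")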